I think the 360° feedback has indeed become a rolling buzzword in Foggy Bottom and mission corridors. One of the bureau bees told me not too long ago that State takes very seriously the leadership qualities of its senior officers, most particularly those going to larger posts. As proof of this, she added, the bureau now routinely requires 360° feedback for all its DCM candidates (I have not heard that one about the CGs). Anecdotal evidence also shows that some posts are requesting “360° feedback” when bidding on some section chief positions. I must admit that I am not sure how many of the 360°s going on right now are done formally through FSI’s Leadership and Management School or HR, but I have seen one-page documents that some posts are using as 360° questionnaires (apparently with questions excerpted from the guidance cable).
As much as I’m happy to see efforts invested in matching senior jobs with the right candidates, I have to say that this makes me feel like we’re flying by the seat of our pants. This is “corridor reputation” in Word documents – except that candidates now get to pick which corridor you can walk into and which part of their reputation you can listen to. To me, the 360° feedback gives us two certainties: 1) that the top bird had at least 6-8 birds in the world who liked him/her, and 2) that the “top bird is a jerk” syndrome has not been systematically ruled out because of #1. (Note: this image was forwarded to me and I could not locate the name of the cartoonist; please tell me if you know).
I have very strong reservations about the use of the 360° feedback for purposes other than developmental. The main goal of the 360-degree process -- where you "go around" employees, asking supervisors, direct reports, peers and clients about their performance -- is to raise the employees’ awareness of how they impact others and to act as a catalyst for change in their behavior. Margaret Kubicek, writing in Training Magazine, adds that:
“To have any genuine value or meaningful impact, 360-degree feedback must be far more than a standalone activity. It should involve managing the individual's expectations, aligning questionnaires to competency frameworks, setting goals to integrate the exercise into personal development plans and providing feedback from trained facilitators.”
Using the 360° feedback for evaluative purposes, especially when a candidate’s next job is on the line, can easily transform this useful learning tool into inflated, useless material with real consequences for operational effectiveness. Inflated? Nah!! Below is my list of how to sex up the 360° feedback:
A• Include people from my inner circle who can throw in harmless comments; this is needed to make feedback sound credible (e.g. “She works too darn hard for her own good at times.”)
B• Include people with whom I have excellent, solid relationships (no invites to ex-spouses or ex-bffs or those I head-banged with)
C• Include only those with excellent and effective communication skills who fit A & B
D• Include my junior protégées who think I’m Mother Teresa
E• Exclude peers who might be “borderline” raters; if I don’t know what you really think of me, you don’t make the list
F• Exclude FSN direct reports; if you have less than perfect English, you don’t make the list (see rule C)
Oops, pardon me – I think my cynical slip is showing here. But really, isn’t all you need a carefully cultivated set of individuals, spread across the relevant rater groups, whom you can then call on during the bid-time crunch?
I do realize that allowing employees to select their own raters increases the chance that the employees will internalize the feedback received and craft appropriate development plans. But since this is being used as a “placement” tool, allowing employees to select their own raters instead of randomizing raters from a larger pool almost certainly ensures that the Bureau hears only the upsides and almost none of the downsides of working with Mr. CG-to-be or Ms. DCM-to-be. My understanding is that in addition to the names that bidders propose as raters, the Bureaus can also ask additional individuals to provide reference input – the question is, how often is that really done?
I think it is important that Bureaus have a good perspective on the bidders/candidates’ people and leadership skills prior to sending them off to more responsible jobs. However, if State must use the 360 as a "review" or "reference" tool, it should refine the process as follows:
-» Randomize the rating pool to include the largest number of raters possible: the 6-8 raters should not be handpicked by the candidate. It is safe to assume that one can find 6-8 souls who would say Candidate X is fabulous even when Candidate X is a lousy boss. I have no argument with the number of raters required; more than eight sets of feedback would probably be too much to handle. I just would like to see the eight come randomly selected from a realistic pool instead of a shallow tide pool.
-» Couple the “360” with the “lobby info” – almost all the individuals who have been lobbied to speak on behalf of the candidates are already positively inclined towards them (or they wouldn’t come calling), so balance this info with the randomized feedback to get a better view of each candidate.
-» Every candidate not selected for the job should be strongly advised to start a development plan (with assistance from a trained facilitator or coach from FSI’s LMS). For every CG or DCM or Section Chief successfully assigned, there are others waiting in the wings. Those afflicted with "micromanagetitis" or ailing as “screamers” should get a chance to improve themselves under competent guidance. I heard that unsuccessful bidders get their feedback from their CDOs (Career Development Officers), but unless CDOs have the specialized skills to assist these candidates, this part of the process is not going to contribute to any sustained momentum for development.
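For what it’s worth, the randomization in the first recommendation above is trivial to operationalize. Here is a minimal sketch in Python; the group labels, names, and per-group quotas are all hypothetical, purely to illustrate drawing raters at random from each rater group instead of letting the candidate hand-pick them:

```python
import random

def select_raters(pool_by_group, quota_by_group):
    """Randomly sample raters from each rater group (hypothetical sketch).

    pool_by_group:  dict mapping a rater group to the full list of
                    eligible names at post (not a candidate-picked list).
    quota_by_group: dict mapping a rater group to how many raters
                    to draw from it.
    """
    selected = []
    for group, names in pool_by_group.items():
        # Never ask for more raters than the group actually has.
        k = min(quota_by_group.get(group, 0), len(names))
        selected.extend(random.sample(names, k))  # sampling without replacement
    return selected

# Hypothetical pool; in practice this would come from post rosters,
# including FSN direct reports.
pool = {
    "supervisors": ["S1", "S2", "S3"],
    "peers": ["P1", "P2", "P3", "P4", "P5"],
    "direct_reports": ["D1", "D2", "D3", "D4"],
}
raters = select_raters(pool, {"supervisors": 2, "peers": 3, "direct_reports": 3})
print(len(raters))  # eight raters, drawn at random rather than hand-picked
```

The point of the sketch is the design choice, not the code: the candidate supplies nothing but his or her position, and the eight names fall out of the roster at random.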
Brian Murphy, writing for the Training Journal, says that 360-degree feedback as a tool harbors inherent dangers, particularly if appropriate guidance and interventions are not readily available. I fully agree, and that’s why it’s necessary, in my view, not only to have competent facilitators for this but also to ensure that raters know how to provide effective feedback. Finally, I think Murphy provides wise insight when he writes, “The gathering of feedback data is simply the starting point in the development cycle. Raised awareness in isolation rarely leads to changed behavior."