Experts in aligning organizational communication with business goals, consulting worldwide on internal and external communication.

Free Advice: Angela Sinickas Answers Questions from Communicators

Publications
- Measuring the Effectiveness of a Published Corporate Plan
- Printed Versus Online Publications

Strategic Planning
- Aligning Communication Strategy with Business Metrics
- Setting Measurable Objectives

Surveys
- Tips on Content and Wording of Surveys
- Response Rates and Random Sampling
- What's a Good Response Rate for an Intranet "Spot Poll"?
- When (Not) to Survey and the Role of Third Parties
- Dealing with "Over-Surveying"

Measuring the Effectiveness of a Published Corporate Plan

 

Q: I work with Brisbane City Council, Australia's largest local government. Council has a strong commitment to communicating with its 1.2 million-strong target audience, and in June this year launched its most readable Corporate Plan ever. We are now in the throes of assessing the effectiveness of this report and would appreciate any assistance in identifying relevant criteria.

Regards,

Orla Thompson

A: Dear Orla:

Ideally, it would be wonderful to have some of the following types of measures on last year's Corporate Plan to compare against this year's. If you did measure anything specific last year, I would start by repeating those measures this year to see if there is any change. Here are some other types of effectiveness measures you might try:

Outcomes

Not having seen the plan, I don't know what type of information is included in it, or why it is issued each year. However, I'm guessing that there are some desirable outcomes you'd like from the readers of the plan. I'd start there:

  • Do you want more or fewer of them to show up at Council meetings on any particular topic covered in the plan?
  • How many of them used a phone number or address the Plan provided for them to contact the Council with questions, comments or concerns?
  • Did you want them to vote a particular way on any public referendums?
  • Did you want them to change their opinions on certain issues that you track by some type of poll on an ongoing basis?

Whatever those ideal outcomes are, that's the first thing I would track and compare against the level they were at BEFORE the plan was issued. If there is an outcome that you want each year after the plan is issued, then try to compare your outcome this year against the outcome after last year's plan was issued.

Reading grade level test

Many word processing programs offer a reading grade level test to tell you how many years of formal education someone needs to have in order to understand the writing. It often appears as part of the grammar check under the Tools or Edit menu. See how well the grade level matches your target audience's average education level. Under a separate email, I'm attaching a copy of a worksheet you can use to do this by hand if your software doesn't offer this tool.
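
If your software doesn't offer the tool, the formula is also easy to script. Below is a minimal Python sketch using one common measure, the Flesch-Kincaid grade level; the syllable counter is a rough vowel-group heuristic, so treat the output as approximate.

    import re

    def count_syllables(word):
        # Rough heuristic: count vowel groups, ignoring a silent trailing 'e'.
        word = word.lower()
        if word.endswith("e") and len(word) > 2:
            word = word[:-1]
        return max(1, len(re.findall(r"[aeiouy]+", word)))

    def grade_level(text):
        # Flesch-Kincaid: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        if not words:
            return 0.0
        syllables = sum(count_syllables(w) for w in words)
        return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

    sample = "Council adopted the plan. Residents may comment at any meeting."
    print(round(grade_level(sample), 1))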

Phone or paper survey

Do a telephone survey resulting in at least 400 to 600 randomly selected respondents (but preferably more; see the margin-of-error sketch below). First ask if they remember receiving the Plan. If not, that tells you how many either didn't receive it or tossed it out before figuring out exactly what it was. For the rest of the respondents who do remember receiving it, you can ask a series of readership-survey questions, such as:

  • How much of the Plan they actually read
  • Which types of information they prefer to read
  • How easy the writing is to understand
  • Whether the Plan was too long, too short or just right
  • How clear the section headings/headlines were
  • How effective the photos and illustrations are
  • How easy the layout is to follow
  • What the preferred distribution method is
  • The overall value of receiving the Plan

You can also ask them to what extent reading the Plan affected a series of potential behavioral or attitudinal outcomes (such as voted differently on an issue, discussed something they read with a neighbor or friend, changed their opinion on an issue, felt differently about the Council itself, etc.).
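
A side note on that sample size: at a 95% confidence level, the worst-case margin of error for a survey percentage is roughly 0.98 divided by the square root of the number of completed interviews. A minimal Python check, with illustrative sample sizes:

    import math

    for n in (400, 600, 1000):
        # Worst case is a 50/50 split; 1.96 is the z-value for 95% confidence.
        margin = 1.96 * math.sqrt(0.25 / n)
        print("n=%d: margin of error of about +/-%.1f%%" % (n, margin * 100))

So 400 completes gives roughly a 5% margin of error and 600 about 4%, assuming truly random sampling, which is why more is preferable.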

Starch Test focus groups

You could convene a series of Starch Test focus groups with randomly selected citizens who said that they read all or part of the Plan. In the session, you'd start by asking participants what they remember having been in the Plan (this is the unaided recall section of the test). Ask each person to write down their own list of the topics, pictures, headlines, anything that they remember having been in the Plan. Then debrief each one so you can see which elements had the greatest overall recall.

Then hand out to each participant a copy of the Plan and a worksheet that lists each section or element of the Plan down the left-hand side of a table. The column headings of the table would say "Skipped," "Skimmed" and "Read Thoroughly." You then ask each participant to go over the Plan page by page and place a check mark in one of the columns for each section of the Plan to indicate how they read it. Then you discuss with the group what types of sections they read thoroughly and WHY, and what they skipped and WHY. This will tell you exactly what appealed to them about the plan and what didn't, from the perspectives of content, writing style and design/formatting. (This is the aided recall portion of the test.)
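
If you want to tally those worksheets across participants afterward, a few lines of code will do it. A minimal Python sketch, with invented section names:

    from collections import Counter, defaultdict

    # One (section, mark) pair per worksheet row, transcribed after the session.
    marks = [
        ("Mayor's message", "Read Thoroughly"),
        ("Mayor's message", "Skimmed"),
        ("Budget tables", "Skipped"),
        ("Budget tables", "Skipped"),
        ("Transport projects", "Read Thoroughly"),
    ]

    tally = defaultdict(Counter)
    for section, mark in marks:
        tally[section][mark] += 1

    for section, counts in sorted(tally.items()):
        print(section, dict(counts))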

After you finish debriefing them on the current Plan, you can distribute last year's Plan and have the group critique the differences from year to year to see what they like better about either one.

Hope these ideas give you something to get started on. Feel free to email me directly with more explanation of the Plan and what it's intended to do so I can provide more specific recommendations for you.

Angela D. Sinickas

Printed Versus Online Publications

Q: Is there any research showing what happens to readership of publications when they migrate from print to strictly online?

Gwen Noel

A: Dear Gwen:

I haven't seen overall research results on this, just the results of specific projects I've done for clients, or what clients have told me about problems they're experiencing that led them to call me. Since this is a really long response, here are the headlines I'll cover:

  • What happens to readership and why.
  • Techniques to try to offset the drop in readership.
  • What happens to overall understanding of key messages.
  • What employees and executives say about going online.
  • My (opinionated) personal conclusions.

Effect on readership

When I conduct communication surveys, online publications have lower readership than print publications, although they typically receive higher scores for overall value to the readers and, of course, timeliness.

This is for a number of reasons, as I've learned from focus groups. Some reasons are mechanical and some, human:

  • Many people don't have computers available.
  • Many people with computers don't have reliable online access, especially outside North America.
  • Some people with computers and online access aren't given the TIME by their managers to check out the intranet. This is true when you rely on kiosks in a manufacturing environment, and it's even more of a problem for employees working in call center environments where productivity measures are very highly watched. The unfortunate outcome is that employees aren't given the time to learn answers to questions that would actually improve their productivity when talking to customers.
  • Many people don't have the time or don't remember to check the publication unless it arrives right in front of their noses.
  • Managers and others who are supposed to print out and post or otherwise share online information with "online have-nots" simply don't do it very often.

Techniques to improve readership

Overall, fewer people read at least part of the publication when it's available only online. The actual numbers will vary depending on HOW the online publication is used and marketed. Techniques to improve online readership that I've seen working very well:

  • Have the first sign-on screen employees use be, in essence, a home page for employee communications, with headlines of the day, etc., right in front of people first thing every morning.
  • Send an email to everyone the day the online publication becomes available, listing the headlines of the publication and possibly one- or two-sentence summaries. This works even better if your email system supports including links from the email directly to the intranet site for the publication.
  • Publish a printed publication just like the email described above. This has the advantage of also reaching people who don't have intranet access with at least the headlines and main point of each big news item. The disadvantage is the lack of an immediate link to the full publication.
  • Scrolling messages at the bottom of users' screens with big headlines of new news available online.

Impact on understanding

One company boldly went online-only about three years ago for a lot of good management reasons: it's the way of the future, it saves money and trees, it's more timely, etc., etc., etc. Unfortunately, they did this knowing that, by the nature of their work force, about half of their employees did not have access to online information. Over a three-year period, they noticed a slow drop in the overall level of employees' perceived understanding about company goals and programs.

Once the communication department broke the data down by job group, they found that the job families without online access had dropped in their understanding levels dramatically, by about 20% to 40%. The full impact had been obscured by increased levels in some other groups. The company had not changed how MUCH information they were providing to employees on this topic, only the delivery vehicles. Other channels, like face-to-face, had not successfully filled the vacuum created by the loss of the print channel.
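
The arithmetic behind that masking effect is easy to demonstrate. In this illustrative Python sketch (invented numbers, two equal-sized groups), a 20-point drop in one group shows up as only a mild dip overall:

    groups = {
        # name: (headcount, % understanding before, % understanding after)
        "online access":    (5000, 60, 70),
        "no online access": (5000, 60, 40),
    }

    total = sum(h for h, _, _ in groups.values())
    before = sum(h * b for h, b, _ in groups.values()) / total
    after = sum(h * a for h, _, a in groups.values()) / total
    print("Overall: %.0f%% -> %.0f%%" % (before, after))  # 60% -> 55%: looks mild
    for name, (h, b, a) in groups.items():
        print("%s: %d%% -> %d%%" % (name, b, a))          # the real story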

Employee/executive comments

In two companies where virtually every employee uses a computer and (theoretically, at least) has online access, we heard really consistent comments. The executives were far more likely to say print should be abolished and replaced with only an online publication. (Although when asked about their own online practices, very few executives checked the intranet even as often as weekly. About a third had NEVER visited the site.)

The most interesting thing was employee reactions to having a publication available only online. About two-thirds said if they had to choose, they'd choose to have only print. In both companies, the main reasons were:

  • I'm staring at a screen all day. It's a relief to hold something in my hands.
  • It's easier to scan and skim in print without missing something that I really do want to read.
  • I typically read this type of information when I'm traveling, commuting, waiting in a client's office, etc.

At a third company we talked with sales people who don't typically come into a company office. They used company laptops all day long on the road visiting their clients. Many had very favorable things to say about the sales publication (in print) but hadn't seen it for a while. When we explained that it was only available online now, many weren't even aware of the change, which had taken place about six months earlier. They also said:

  • I'm having to access email and online forms, etc. from my home by modem. We only have one phone line and my wife/kids hate for me to tie up the line too long.
  • I can't access this information during the day when I'm visiting a client. At the end of a long day, I just want to send the company the information I have to. The last thing I want to do is spend another half-hour online checking out the intranet or the online sales publication.

Conclusion

My personal conclusion is that print has a definite place in the mix of our communication channels. The position it should hold does depend on access issues for your own employee population. But even with universal access, it's too easy to kid ourselves that we're communicating just because we're posting things online. Very few might be seeing it.

I just heard of a consulting firm talking about how readership of their external newsletter has increased since it went online. They mentioned the overall number of people visiting the newsletter site and how long they spend reading it. As a former avid reader of the print piece when it came into my inbox, I find that very hard to believe. I've never sought their publication out, even though I always found useful and interesting information in it. I just don't remember to go there. I suspect the hits they're getting are from current clients who are already at their site doing other things, not the prospects they were trying to entice into becoming customers. Also, the "time spent online" they reported can easily be misinterpreted. The software tracking programs can't tell you if a reader is really reading for 25 minutes or talking on the phone or with a colleague while the publication is onscreen, unread.

I hope this provides some food for thought. I'd love to see other people's survey results or comments!

Angela D. Sinickas

Aligning Communication Strategy with Business Metrics

Q: There is often debate about communication and its value within an organization. How do you create a framework to more closely align communication strategy with business metrics and measure the impact in a manner that is convincing to executives, to the point where they'll understand how and why it can be built into the business model? We've made links to ESAT and CSAT, but it still doesn't seem to be enough. Your comments would be greatly appreciated.

Yves Fredette

A: Dear Yves:

It's nearly impossible to quantify the ROI of an entire communication program for an entire business strategy in a way that makes sense to executives. You'll need to pick one particular business initiative that supports one business strategy and:

  • Track your communication program's outcomes (increased levels of knowledge, positive changes in attitudes, increased access to communication channels, etc.)
  • Track the business outcomes of the initiative (increased sales, improved quality, reduced costs, improved safety, reduced turnover, increased productivity, etc.)
  • Track the specific behaviors and actions of the employees who are doing things differently in ways that help achieve the strategy more effectively: things they are doing BECAUSE of the communication. (Pick behaviors that operational management is already tracking, like how long an employee spends with each customer on the phone.)
  • Financially quantify the value of a specific level of change in the behavior (each accident costs us $X, each percentage of quality improvement saves us $Y).
  • Draw a graphic that shows the increased knowledge and improved attitudes month by month against the changes in behaviors and the changes in the outcome, as sketched below. If the communication and operational outcomes are tracking with each other in the same direction, that presents fairly compelling evidence to management. If they want more, you might need to do some statistical analysis to compare various locations' results. This can be even more compelling if you conduct pilot groups with your suggested new approach to communication and do nothing differently at other locations. Then you can compare outcomes for the pilot and control groups.
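
Here is a minimal sketch of that month-by-month comparison, with invented numbers (statistics.correlation requires Python 3.10 or later):

    from statistics import correlation

    months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
    knowledge = [42, 48, 55, 61, 66, 70]       # % answering key-message questions correctly
    behavior = [3.1, 3.4, 3.9, 4.2, 4.6, 4.8]  # a tracked behavior, e.g. issues resolved per hour

    for m, k, b in zip(months, knowledge, behavior):
        print("%s: knowledge %d%%, behavior %.1f" % (m, k, b))
    print("Correlation: r = %.2f" % correlation(knowledge, behavior))

A high correlation doesn't prove causation on its own, which is why the pilot-and-control comparison described above makes the stronger case.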

To achieve all this, you first need to work with operational management to identify the right behaviors and then conduct some open-ended research (interviews/focus groups) with the employees. You'll need to discover why employees aren't doing what they should be to maximize the outcomes of the business initiative. Identify what knowledge they're lacking (how to do something, where to get certain resources, what, when or where specifically they're supposed to do something) and identify any attitudes that might be blocking their behaviors (my efforts won't make any difference, there's no reward for me to change what I do, no one cares anyway, if I do this my customers might not like it, my boss will yell at me if I do this, etc.). What you learn should form the basis of your communication approach.

Hope this helps!

Angela D. Sinickas

Setting Measurable Objectives

Q: I would like to know if there is a reliable measurement instrument that could be applied to determine the level of programme effectiveness when it comes to measuring communication programmes. Your response would be sincerely appreciated.

Werna du Preez

A: Dear Werna:

Unfortunately, I don't think there is just one instrument. Many people have developed different instruments. In addition, what makes your own program successful might be quite different from what makes another company's program successful in the eyes of its management. You will need to tailor a measurement instrument to your own needs. For an overview of the many different approaches to measuring the effectiveness of a communication program, you might want to take a look at my response about communication audits to Clark Miller elsewhere in this list of questions to see if some of them might work well for you.

The beginning point should be setting objectives for your program, and making them measurable objectives. For example, one goal might be to make sure employees know about the company strategy. Possible measurable objectives could include any of these different ways of defining success for this objective:

  • Have 80% of employees be aware that we have a written strategy
  • Have 50% of our employees be able to identify our three strategies from a list of five possible ones.
  • Have 67% of employees know what percentage of market share we are trying to achieve in the year 2001.

You can also develop measurable objectives for communication channels, for example:

  • Ensure that 95% of employees receive our employee publication each month
  • Ensure that 67% of the employees who do receive the publication believe it provides information they either need or want to have.
  • Have at least 25% of employees who receive the publication say that reading something in the publication has affected the way they do their jobs.

When you set objectives, you first spell out what criteria have to be met to count as "success." This should be developed together with your management to be sure that your definition of success matches theirs. When you make those objectives measurable, you begin to define the exact questions that need to be in your own individual measurement instrument, which will help you quantify how successful you are.

Thanks for your question,

Angela D. Sinickas

Tips on Content and Wording of Surveys

Q: I am the Communications Manager for a consulting company, and our employees are dispersed throughout the country at various client sites. Each year we distribute a communications program survey to all employees at our annual meeting as a way of measuring the effectiveness of our internal communications programs. My question is two-fold:

1. What types of questions should I include in this survey to really measure value?
2. And how can I word my questions to elicit the right responses?

I truly appreciate your expert advice!

Mary Yanocha, Communications Manager
PM Solutions

A: Dear Mary:

First, here are some general idea starters for a survey. Of course, they need to be tailored to your own specific communication program, your executives' expectations and your employees' needs:

  • Levels of interest and understanding about key messages
  • Current and preferred sources for each message topic
  • Access to various communication channels
  • Overall value of each channel
  • Ideal frequency of each channel
  • Current frequency of various face-to-face meetings
  • Effectiveness of communication skills for supervisors/managers and executives
  • Other more broad questions about information credibility, accuracy, timeliness, volume, etc.
  • Some highly focused "readership" type questions about a key channel or two.

As far as tips for wording your questions, whole books have been written about that! Here are some of the important keys:

  • If you will want to compare your results with those of other companies, you'll need to use the exact wording of the questions in the benchmarking database's question pool.
  • Make sure the wording will result in specific actions you can take.
  • After you draft your initial questions, pretend first that you received a highly favorable response, and then a highly negative one. Do you know enough about what actions to take to turn a negative response into a favorable one? For example, let's say you ask people to agree/disagree with the following statement: "The employee newsletter should continue to be published once a month." If they disagree, you don't know whether to increase or decrease the frequency. It would be better to ask people to select their ideal frequency from a list you provide (weekly, monthly, quarterly, etc.).
  • Avoid words that can be interpreted differently. Obvious ones include words like bimonthly, which can mean twice a month or every other month. Other typical words or phrases that need to be defined when you use them include "senior management," "your location" and any jargon or abbreviations.
  • Be sure questions ask about only a single item. For example, don't ask if people think communication is open and honest in one question. It can be one but not the other, so people won't know how to respond and you won't know which problem to fix.
  • Avoid built-in assumptions in your questions.
  • If you use an agree/disagree format, use middle-ground adjectives in the question; for example "good" instead of "excellent" or "horrible." That gives respondents more leeway in agreeing or disagreeing somewhat or strongly, so you'll receive a more accurate reading of the range of perceptions.
  • Phrase questions in a way that prevents people without a legitimate, informed opinion from answering. For example, if you ask if communication has improved, worsened or stayed the same during the last 12 months, you need to include an option that says: "I haven't been here 12 months." Otherwise, people who have been hired recently would probably choose "stayed the same" and dilute the true results from those who have been here during the entire time period.
  • Obviously, use clear and simple phrasing. Do a reading grade level check on the survey and try to keep it between grades 8-10 (US system, which means 8 to 10 years of formal education required to understand the writing).

Finally, pretest your survey with a random selection of the types of respondents you're likely to have. Ask them which questions are difficult to understand or to answer, which questions are missing a response category they'd like to have available, etc.

Hope this helps!

Angela D. Sinickas

Response Rates and Random Sampling

Q: We are conducting a communication audit. What is the right number of surveys to send out and what percentage of the surveys we send out should we expect to be returned?

Alice McCormack

A: Dear Alice:

The answer to your question about how many surveys to send out is very complex. It sounds as if you're considering sending out surveys to a sample of employees rather than all of them.

First of all, the number of surveys to send out depends on how many employees you have, which you don't mention. If you have a relatively small number of employees, you might need to send out surveys to everyone. If you have several thousand employees or more in total, you would need only 500-600 completed surveys to have fairly reliable results for your population AS A WHOLE, assuming the responders accurately reflect the demographics of the entire group.

However, most companies also want to be able to compare various organizational subgroups against each other (locations, business units, etc.). This typically requires a much larger number of responses so that you have a sufficient proportion of each subgroup participating. Also, smaller subgroups may need a larger proportion of the group responding for statistical reliability than larger subgroups.
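
To illustrate both points, here is a rough sketch of a standard sample-size calculation (95% confidence, 5% margin of error) with a finite population correction; notice how much larger a SHARE of a small subgroup must respond. A statistician should still vet the real design:

    import math

    def sample_size(population, margin=0.05, z=1.96):
        n0 = (z ** 2) * 0.25 / margin ** 2    # infinite-population size, about 384
        n = n0 / (1 + (n0 - 1) / population)  # finite population correction
        return math.ceil(n)

    for pop in (200, 1000, 5000, 20000):
        n = sample_size(pop)
        print("population %6d: ~%d responses (%.0f%% of the group)" % (pop, n, 100.0 * n / pop))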

The number of people to send surveys to really needs to be determined by a statistician who is provided with information about the size of your employee group and subgroups. If you don't do this carefully, some executive with a statistics background will invalidate all the results of your survey when you're done.

The second part of your question, though, relates to those 500-600 COMPLETED surveys I mentioned. If you typically get a 50% response rate on surveys, then you would need to send out 1,000 to 1,200. However, a great many factors will affect your response rate.

The response rate on communication surveys I've done for clients varies from 20% to over 80%. A lot of it has to do with:

  • The length of the survey. The longer it is, the lower the response rate.
  • Demographics questions. If there are too many of them, or if they are on the first page of the survey, the response rate plummets.
  • Previous experience. If a company has administered many surveys and never reported back results or made changes based on the surveys, the response rate will go down with each new survey.
  • Management support. If senior management lets middle management know that they really want to see the results and want to see good participation in all units, managers make sure employees are given some time to complete it. Otherwise, they give people grief about "not working" while they're completing surveys.
  • Incentives. If there is a reward for the location or the department with the best response rate, or if every location with at least a minimum return rate receives a reward, that makes the biggest difference. Then peer pressure gets results. For example, getting an extra day off around a three-day weekend for each "winning" location.
  • How and where it is administered. Paper surveys sent to the home will have a lower response rate than those distributed individually at work. (However, you need to be aware that some employee groups don't have the physical environment at work that provides so much as a writing surface.) Of course, group administration in small meetings will get the best rate. Electronic surveys (Web, email or phone) tend to have responses come in more quickly. Most responders do it right away; with paper, many delay until closer to the deadline. However, with some electronic administration methods, people are more likely to feel that they could be identified individually. If you're asking communication questions of a sensitive nature, such as about supervisors' communication skills, you might get a lower response rate electronically than on paper, which is perceived as providing more anonymity.

I'm sorry for such a long answer, but this is a very complicated issue!

Angela D. Sinickas

What's a Good Response Rate for an Intranet "Spot Poll"?

Q: I've spoken with you on the phone before and your advice helped me immensely. I have a question about employee polls. We have a multiple-choice question from our President and CEO posted weekly on our Intranet site. We are wondering what type of response rate would be considered favorable. We started the feature in March of 2001 and have averaged about 300 responses from a total of 17,000 employees. This week, we are inching close to 800 responses. Over time, we have had more than 2,000 unique voters; that is, more than 2,000 employees have answered the questions at least once. This is more than 10 percent of our employees. I had thought that even a 1 percent employee response would be good. Is this correct?

Thank you for your time,

Nicole Townsend, Raley's

 

A: One major question I have is what percentage of your employee workforce has a computer on their desks? That's the number I would use as the denominator in calculating your percentage response rate. Those who don't have daily access are not at all likely to even see the poll, let alone decide to answer. The other factor I'd look at is how many of your employees (unique users) actually visit the site in the same time period as a poll. That would truly be your response rate from the employees who "received" a survey. Hope this helps,

-Angela D. Sinickas

 

Q: Thank you for your quick response. I gathered the numbers you requested. Being a large grocery chain, our computers are dispersed somewhat differently from most corporations'. None of our stores has the same number of computers; however, I think it is safe to say that each of our 149 stores has at least one computer (accessible to the manager). Others have more than one, often a second in the break room, which employees can access on breaks. For data purposes, though, I would say there is one per store.

Adding in our Corporate Office and satellite offices, I would estimate that about 785 of our 17,000 employees have "a computer on their desk." Regarding your question about how many people see the survey, this is also a bit tricky. We can safely say that a majority of the 785 employees with computers see our intranet because it comes up as the browser home page unless someone has changed it. (I don't think this happens too often.) However, we do not have a way to track the number of store employees who have access to the intranet in their break rooms.

 

A: It looks like you're getting a really strong response rate from those who are likely to be aware of the survey, anywhere from 40% on up. The only important thing to reinforce with your leadership team is that the results may reflect the views of professionals and managers for the most part, and not your average store employee.
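
Using the numbers from this exchange, the choice of denominator changes the story completely; the peak week even exceeds 100% of the desk-computer count, precisely because break-room users aren't included in the 785:

    responses_typical, responses_peak = 300, 800
    all_employees, desk_computers = 17000, 785

    for label, n in (("typical week", responses_typical), ("peak week", responses_peak)):
        print("%s: %.1f%% of all employees, %.0f%% of desk-computer users" %
              (label, 100.0 * n / all_employees, 100.0 * n / desk_computers))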

Angela D. Sinickas

Q: Thank you so much Angela. Actually, what is interesting is that store employees are the ones responding at a higher rate. Even though they do not have computers themselves, they do use the computers in break rooms. What we've found, since we use employee numbers to categorize the voters' classifications, is that store employees vote more often than those employees who are managers and professionals with the computers on their desks. Amazing!

When (Not) to Survey and the Role of Third Parties

Q: I am a survivor of a merger of four regional airlines in Canada. As the sole body in charge of employee communications for an organization with 6,000 employees based in dozens of communities from coast to coast, I'm feeling just a tad overwhelmed. I have been tasked with a number of challenges, not the least of which is conducting an internal communications audit.

My question is this... is there ever a wrong time to conduct a communications audit? I mean, the organization structure has yet to be finalized, and employees are leaving the company or moving into new areas on a daily basis. Results are bound to be skewed by a significant number of people who will not be around in two months. As well, the prevailing gloom and doom brought on by months of uncertainty and stress is likely to abate somewhat as the dust begins to settle.

Another big question... by nature, should not an audit be performed by an impartial third party instead of the very people (person) who produce(s) internal communications? Any advice or hints on how to begin, where to look for these answers, and audit resources would be much appreciated.

Flying without a copilot

A: Dear Flying,

First, let me say I really empathize with you. I've been in similar situations, and it's rarely very satisfying, but it does provide for lots of good experiences to trade on later! Yes, many parts of a communication audit should be done by an outsider for maximum objectivity and candor: executive interviews, employee focus groups, etc. Sometimes the third party also brings expertise in things like developing questionnaires and interpreting survey results, where the issue isn't so much objectivity as it is experience in knowing what to ask and how to analyze what's meaningful and what is just so much noise.

On the other hand, there are a lot of measures that you CAN conduct yourself (assuming you have the spare time!) because the measures are objective by definition. For example, you could conduct a content analysis to see what types of information you have been covering in a publication or your intranet site and how well that coverage matches your organization's objectives, geographies, business units, etc. You could also install software on a web site that measures many aspects of usage.

As far as the right time to measure, it depends on what you're measuring. If you want to find out how satisfied employees are overall with communication, they're probably at a low point now. That means it's the perfect time to measure if you want to establish a low baseline now to show how much improvement you make in the future. It's the wrong time if you want to show why you should get a big pay increase right now.

However, most communication audits identify not just satisfaction, but actual patterns of current and preferred communication. That information could be invaluable right now to make sure everyone is getting what they need, the way they need it, as soon as possible to help the company get profitable again quickly. Also, I don't know about Canadian tax laws, but in the US, many types of research related to a merger are tax deductible under very favorable conditions if they are conducted within a short time of the merger.

For more information on where to start, you might want to look at some of the other Qs & As in this section on communication audits. Best of luck, and may you have a strong tail wind!

Angela D. Sinickas

Dealing with "Over-Surveying"

Q: What words of wisdom can you pass on about when you're working for a company that is reluctant to survey: "Oh, staff have been over-surveyed; we don't want to bother them." Unless we set goals and measure, we can never determine if all the effort has been worthwhile. Help!

Many thanks for your assistance.

Cathy

A: Dear Cathy,

I think that management often believes employees are over-surveyed before employees do. Employees feel over-surveyed in some of the following situations:

  • The surveys are developed by individuals with no survey design background and include questions that are difficult to understand or respond to, or seem to have no relevance for improving employees' own work experience.
  • When employees never hear the results of past surveys.
  • When employees see changes in the company, and there is no reference made to the relationship of the changes to past employee surveys.
  • If it is a relatively small group (under 1,000) so that every survey is sent to every employee every time, rather than sending different surveys to different randomly selected groups of employees.

Techniques that help get around this "survey fatigue":

  • Coordinate all employee or customer surveys through a clearinghouse so that the timing of surveys doesn't overlap and you don't ask questions for which answers are already available.
  • In the introduction to a new survey, begin with key findings and changes made based on a previous survey or other form of employee research.
  • Send the survey only to a sample of employees (although you'll need a statistician's help in selecting a sample of the right size). If you know several surveys will be administered about the same time, pick mutually exclusive samples at the same time so that no one person receives more than one survey during that time period.
  • Literally connect changes the company is making with employee or customer survey results when you announce the changes.
  • Consider doing very short "stealth surveys" for which you don't obtain advance permission. People might not even know they've been surveyed. For example, obtain a list of 400 to 600 randomly selected names and divide the list among 10-15 of your colleagues at work (or give it to an intern); a sketch of drawing and splitting such a list follows below. Then have your deputized research team call these employees, identify themselves, and say they're wondering what people think about a couple of topics you're planning to communicate about. As you all have these conversations with employees, you would actually be recording their answers on a survey form. Or, somewhat less scientifically, you could stand near lines in the cafeteria or the credit union (or in a check-out line for customer research) and ask people a few questions while they're waiting. It's not statistically defensible, but it will certainly give you a good directional reading of where a wide range of different types of people stand on a topic.
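
Here is a minimal sketch of drawing and splitting such a call list; employees.txt (one name per line) is a hypothetical input file:

    import random

    with open("employees.txt") as f:
        names = [line.strip() for line in f if line.strip()]

    sample = random.sample(names, k=min(500, len(names)))   # aim for 400-600 names
    callers = 12                                            # colleagues helping out
    batches = [sample[i::callers] for i in range(callers)]  # round-robin split
    for i, batch in enumerate(batches, 1):
        print("Caller %d: %d names" % (i, len(batch)))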

Hope this answers your questions,

Angela D. Sinickas

© 2010 Sinickas Communications, Inc., All Rights Reserved. This Website, and all its content, is the exclusive property of Sinickas Communications, Inc., and is protected by US and international copyright laws. You may not reproduce, distribute, transmit, incorporate into any publication, product, Website or computer network, or use the content in any other way whatsoever without the express written permission of Sinickas Communications, Inc.