The Ultimate Survey Guide
Use this survey guide to learn how to deliver a successful survey project, design surveys, get responses, analyse results, present results and plan action.
There is a clear link between feedback and success. Feedback is central to everything we do.
Based on our research, we know that not enough individuals, teams or organizations are getting the feedback they really need. Often, leaders just don’t think about it. Or they don’t make it a priority.
Most people like the idea of getting feedback. However, many things get in the way, including lack of time, expertise, tools and resources, along with other priorities and budget constraints. And, sometimes, people avoid it because they fear what people might have to say.
Leaders often make decisions and act without seeking feedback and ideas from those who have valid contributions that should be considered. This can have several consequences. People with important ideas may not feel valued. And, without their contributions, poor decisions may be taken. People are then unlikely to be aligned and committed to the decisions.
There are unlimited opportunities for leaders, teams and organizations to get feedback. When asked, most leaders mention customer surveys and employee surveys. But they often don’t think about many other valuable purposes.
Our advice is to take a disciplined approach to feedback and to consider it an essential tool to engage and align teams.
You need a structured process to ensure success with your survey project.
Goals and objectives are essential to guide survey design. Resist the urge to dive head-first into writing questions.
Start at the high level before getting to the detailed survey design. Begin the design process by establishing the goals and objectives.
Once you have specific and measurable objectives, you can start to think about writing questions that achieve them. To develop good objectives, it’s important to be confident about the subject at hand.
There are a couple of ways you can do this. One is to consult with experts and another is to do some research before you start writing.
Research the topic before writing questions about it. This will help you craft valid, relevant questions for your audience and can surface questions you may not have considered.
Research also presents an opportunity to compare the facts and figures you get from different sources. When facts vary, it is an indication that more research is needed.
Depending on the type of survey you are designing, it can be helpful to talk to a subject matter expert or involve a Steering Committee before you do your own research.
Field experts understand the subtlety in a topic area and the things that can look important but are ultimately distracting. Talking to someone in the field can save time and help focus research into the right areas.
A benefit of talking to a subject-matter expert or a Steering Committee is that they are likely to know the pitfalls with certain types of questions and what to expect. They can help you to shape your objectives into relevant questions for the survey.
Even though you and your team may be capable of creating thought-provoking questions, it can be beneficial to talk to an experienced researcher or consultant. Someone with strong survey-building experience can help you clarify the type of information you are seeking and avoid pitfalls. Bringing in a third party adds an objective pair of eyes to the process. They will help make the objectives more specific and identify where more information may be needed.
Clear policies are important to guide those who manage the survey project and protect the respondents.
Policies may be necessary to guide internal actions as well as to give comfort to the respondents. Most important is that the policies are clearly communicated to the relevant people in the project. Such communication may need to be explicit, or it may be implicit, depending upon the nature of the policy and its target audience.
It is imperative that the survey policies are communicated to, and adhered to by, everyone who has access to the survey project results. Anonymity policies that are communicated to survey respondents give them comfort about who will be able to see their responses.
Start at the high level before getting to the detailed survey design. Begin the design process with clear outcomes in mind.
The answers to the questions below will help you start thinking about the best way to get the information that you need.
Headings and instructions provide a sense of structure and make the survey easier for respondents to follow when they complete the survey. They can also make it easier for those who analyze and interpret the data.
Page breaks can serve several purposes. They are a way to ensure that survey responses are saved if a person is interrupted when completing the survey. They can also be used to focus attention on a single question at a time. And, they can be used to fit with the structure and headings or instructions in a survey. For example, they allow headings and instructions to be placed at the top of a page and can be varied from one page to the next.
When you have multiple free text questions, we recommend using page breaks more often to reduce the risk of someone writing a large amount of text and then losing their work, for example through an interruption or a connection problem.
Be careful not to use too many page breaks. On longer surveys they will be very frustrating for the respondents, particularly if they are using a mobile device.
After considering the bigger picture goals and objectives, start designing the survey itself. The survey questions you create must be designed to achieve your survey goals and objectives; they shouldn’t ask anything that is irrelevant.
The guidelines below will help you stick to the point and gather only the data that supports your goals and objectives.
There are different question types you can use. Using the right type is important to get valid data.
Quantitative questions are directly measurable. This means that you set up a list of answers and your respondents will choose from those possible answers. These questions will give you clean reports, easy-to-analyse charts, and will help you identify patterns and trends.
Qualitative questions let respondents answer in their own words. Even though they can be more difficult and time consuming to analyse, qualitative questions provide deeper insight into how your respondents are thinking.
To get the best results, use a combination of quantitative and qualitative survey questions. If asking qualitative questions, don’t ask them up-front. Get buy-in from your respondents early with easy quantitative questions and leave the free text questions to later.
It is a common mistake to design and conduct surveys without considering the reporting needs; reports can then be difficult to pull together. Avoid this by thinking about your reporting during the design stage. A benefit of this is being able to include custom values that you will see in your reports but which are hidden from the survey respondents. You might need this when reporting on specific values that don’t mean anything to your respondents, or when you have to analyse data in a particular way.
When you set your reporting values, go back to the purpose of your survey. Think about the values that are going to allow you to achieve the goals and objectives, and that will fit the purpose. That way, you will design questions that give you effective and useful reports.
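As a simple illustration of hidden reporting values, the sketch below attaches a numeric value to each response option so that reports can compute scores the respondent never sees. The option labels, field names and function are invented for the example; they are not any specific product’s data model.

```typescript
// Hypothetical example: respondents see only the labels, while reports
// use hidden numeric values attached to each option at design time.
type ResponseOption = {
  label: string;        // shown to the respondent
  reportValue: number;  // hidden value used only in reporting
};

const serviceRating: ResponseOption[] = [
  { label: "Very dissatisfied", reportValue: 1 },
  { label: "Dissatisfied", reportValue: 2 },
  { label: "Neutral", reportValue: 3 },
  { label: "Satisfied", reportValue: 4 },
  { label: "Very satisfied", reportValue: 5 },
];

// Average score for a question, computed from the hidden values.
function averageScore(selectedLabels: string[], options: ResponseOption[]): number {
  const values = selectedLabels
    .map((label) => options.find((o) => o.label === label)?.reportValue)
    .filter((v): v is number => v !== undefined);
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

console.log(averageScore(["Satisfied", "Very satisfied", "Neutral"], serviceRating)); // 4
```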
Pay careful attention to all elements of the design to avoid mistakes, improve response rates, ensure quality data, and help produce valuable reports.
There are pros and cons to each type of survey question.
Open ended (or free-text) questions give respondents the opportunity and freedom to respond in words. Use these questions carefully and sparingly. They usually take respondents the most time to answer. They can be time consuming and difficult to analyze. Using too many open ended questions is likely to result in the respondents experiencing survey fatigue. When using open-ended questions, make sure that you provide regular page breaks and the opportunity for the respondent to save their responses.
Closed questions ask respondents to choose from a specified set of possible answers. This type of question includes rating scales, demographic questions, multiple choice and forced choice. Use closed questions to get quantified results. Closed questions allow easier comparison and analysis of results. They are also easier to present and discuss. Closed questions are typically quick and relatively simple for respondents to answer.
Multiple choice questions with a single answer limit respondents to selecting one response. Among other purposes, these can be useful for asking demographic questions.
Be careful to consider all options. Respondents can be frustrated if the response they want to give is not provided.
Multiple choice questions with multiple answers allow respondents to select several responses. The minimum and maximum number of required responses can be set. For example, you may allow respondents to select all responses, or you may require them to select a specific number or a range (such as between 2 and 3). Again, remember to consider all options to avoid respondent frustration.
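For example, a multiple-answer question configured to require between two and three selections could be validated with a check like the sketch below; the function name and limits are illustrative only.

```typescript
// Illustrative check for a "choose between 2 and 3" multiple-answer question.
// The min/max settings here are assumptions for the example, not a specific product API.
function isSelectionValid(selected: string[], min: number, max: number): boolean {
  return selected.length >= min && selected.length <= max;
}

console.log(isSelectionValid(["Email", "Phone"], 2, 3));                  // true
console.log(isSelectionValid(["Email"], 2, 3));                          // false: too few
console.log(isSelectionValid(["Email", "Phone", "Chat", "Post"], 2, 3)); // false: too many
```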
Rating Scales provide a common set of scale options for respondents to answer survey questions. Using the same rating scale allows comparison of responses across multiple questions in the survey. Rating scales help to discover varying degrees of opinion. They are typically quick and easy for respondents to answer.
There are two key questions when developing a rating scale: how many points should the scale have, and how should each point be labelled?
In most situations, five or seven point scales work best and are the most reliable. Typically, we prefer five point scales unless a question specifically requires greater differentiation. Five point scales provide valid data and are easier for respondents to complete.
There are some exceptions, as in the case of a Net Promoter Score survey which has a scale of zero (Not at all likely) to 10 (Extremely likely) where the points in between are not labeled.
Our recommendation is to label each response option with words that clearly define what each point means. Words are better than numbers alone for several reasons. For example, if you provide a range of 1 to 5 without labels, what does each number mean to the respondent? Labelling only the first and last numbers in a scale still leaves the question of what the middle numbers mean.
The Net Promoter Score (NPS) is a common approach to gather customer feedback and establish a high level performance benchmark. NPS uses an 11 point scale from zero (Not at all likely) to ten (Extremely likely) and a single question about how likely a person is to recommend you.
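By convention, respondents scoring 9 or 10 are counted as promoters, 7 or 8 as passives, and 0 to 6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. A minimal sketch of the calculation:

```typescript
// Net Promoter Score: % promoters (9-10) minus % detractors (0-6), on a 0-10 scale.
function netPromoterScore(responses: number[]): number {
  const promoters = responses.filter((r) => r >= 9).length;
  const detractors = responses.filter((r) => r <= 6).length;
  return ((promoters - detractors) / responses.length) * 100;
}

// Example: 5 promoters, 3 passives and 2 detractors out of 10 responses gives an NPS of 30.
console.log(netPromoterScore([10, 9, 9, 10, 9, 8, 7, 8, 5, 6])); // 30
```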
We also recommend avoiding overly long or complex response options. The objective is to allow respondents to answer in a way that differentiates without providing too many points where the scale becomes difficult to answer or overly complex.
Consider using a Balanced Scale, which gives respondents an equal number of response options around a midpoint. Balanced scales allow respondents to select a neutral response rather than forcing responses that do not match how they feel. An example is a five-point agreement scale: Strongly disagree, Disagree, Neither agree nor disagree, Agree, Strongly agree.
We recommend providing a scale option for those who may not be able to answer. Examples include “Don’t know” or “Not applicable” or “Prefer not to answer”. Perhaps the respondent has had no experience with the specific question being asked. Don’t force them into making a choice without relevant information.
Use Yes/No questions sparingly and only when you need an absolute response or are qualifying respondents.
Our recommendation is to make questions optional in most cases. That way you avoid forcing people to answer questions that they really don’t want to; forced answers may not be valid.
Design the survey questions to achieve the objectives. Avoid anything that is irrelevant.
Here are some guidelines for writing good questions that support the goals and objectives and get valid data.
Quantitative questions are directly measurable with a list of answers for respondents to choose from. They give clean reports, easy-to-analyse charts, and help to identify patterns and trends.
Qualitative questions let respondents tell you the answer in their own words. Even though they can be more difficult to analyse, qualitative questions will show exactly how respondents are thinking.
To get the best results, use a combination of quantitative and qualitative survey questions. If you ask qualitative questions, don’t ask them up-front. Get buy-in from your respondents early with easy quantitative questions and leave the free text questions to later.
Conditional Logic lets you create dynamic surveys that change what a survey respondent sees and what happens based on their responses.
Logic can be set to hide questions or instructions, finish a survey early, redirect respondents to a URL on completion and set or append messages to the finish text depending on the respondent’s answers.
A survey that uses conditional logic requires meticulous planning, execution, testing and piloting to avoid errors and ensure the survey objectives are met. Keep the logic as simple as possible.
Conditional Logic is typically structured with three components: Actions, Triggers and Conditions.
Actions are what you want to happen when responses meet defined Conditions and Triggers, such as hiding questions or instructions, finishing the survey early, redirecting to a URL on completion, or appending messages to the finish text.
Triggers are defined responses to survey questions. Each trigger has answer options to check against, and these options vary depending upon the question type.
Conditions contain one or multiple Triggers that, if met, will result in the Action. Conditions can be set to meet “any” or “all” triggers.
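As an illustration only, the sketch below shows how conditional logic of this kind can be pictured: a Condition’s Triggers are evaluated in "any" or "all" mode, and the Action fires when the Condition is met. The types, names and example answers are hypothetical, not Spark Chart’s actual configuration.

```typescript
// Illustrative model of conditional logic: names are hypothetical, not a product API.
type Trigger = (answers: Record<string, string>) => boolean;

type Condition = {
  mode: "any" | "all";   // fire the action if ANY trigger matches, or only if ALL match
  triggers: Trigger[];
  action: string;        // e.g. "hide questions 5-10", "finish survey early"
};

function shouldFire(condition: Condition, answers: Record<string, string>): boolean {
  return condition.mode === "any"
    ? condition.triggers.some((t) => t(answers))
    : condition.triggers.every((t) => t(answers));
}

// Example: skip the detailed feedback section if the respondent is not a customer.
const skipDetail: Condition = {
  mode: "all",
  triggers: [(a) => a["isCustomer"] === "No"],
  action: "hide questions 5-10",
};

console.log(shouldFire(skipDetail, { isCustomer: "No" }));  // true: hide questions 5-10
console.log(shouldFire(skipDetail, { isCustomer: "Yes" })); // false: show them
```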
Integrate the survey with third party applications to automate processes and create value.
Did you know that you can connect Spark Chart with 1000+ apps and automate survey tasks and workflows using Zapier? Learn more at https://zapier.com/apps/sparkchart/integrations.
Zapier is a web application that lets you create do-it-yourself automated tasks and workflows between apps without writing any code. Zapier calls these automations “Zaps”. With Zapier Triggers and Actions, you can automate many workflows between Spark Chart and other applications.
To learn more about Zapier, please see their help at Zapier Help Basics.
Here are some examples of workflows you might automate: adding new contacts from your CRM as survey recipients, sending completed survey responses to a spreadsheet, or posting a notification to a team chat channel when a new response arrives.
Pilot the survey to validate the questions and the effectiveness of the messages.
The pilot should aim to cover, and get feedback about, the entire survey process, not just the survey questions. That includes the survey invitation, the instructions, questions, response options, messages, emails, reminder messages and the thank you messages.
Piloting should identify questions, messages or instructions that don’t make sense, or will lead to biased responses. It can also be useful to identify areas that may have been missed when designing the survey.
At a minimum, first pilot the survey yourself. Then, consider piloting with others (e.g. members of the survey project team). Beyond that, identify people from the target audience and engage them in the pilot. The number and nature of people to involve in the pilot will depend upon a range of factors, including the goals and objectives of the survey, the size of the target audience, the diversity of the target audience, and the availability of pilot participants.
In a Participatory Pilot, the participants are clearly informed that they are taking part in a survey pilot. This type of pilot is useful when feedback is needed from the target audience about the survey content, processes and messages.
In an Undeclared Pilot, the participants are not aware that they are part of a pilot group; the survey is issued as if it were live. This type of pilot is useful to ensure there are no issues with survey completion or to review the results received.
What you communicate to the pilot group is likely to depend upon whether the pilot group are aware that they are participating in a pilot or not.
In a Participatory pilot, engage pilot participants in such a way that makes them want to be involved and to help. The pilot participants need to be aware of what is expected in the pilot process and how their pilot data will be used. More than likely, the survey will be refined after the pilot, so the responses from the pilot group will need to be deleted. The pilot participants need to be aware of this and that they will be asked to complete the survey again when it goes live.
These pilot participants need clear guidelines about 1) what they are to look for and 2) how they will provide feedback about the survey. E.g. can they provide their feedback in the live pilot survey? Do they have to take notes and send them separately? Is an online collaboration tool being used to collate pilot feedback? When are they expected to provide feedback?
In an Undeclared pilot, conduct a survey with a small live group of target respondents without them being aware that they are participating in a pilot.
After the pilot, review the feedback and refine the survey, instructions, key messages and the communications. Depending upon the pilot, it may be necessary to conduct a follow up pilot, particularly if major issues are identified.
Once satisfied with the survey, it is important to get sign-off from the key project stakeholder.
Avoid these common survey mistakes.
Don’t dive head-first into question writing without first establishing clear goals and objectives. They are essential to guide the survey process and ensure the survey results in valuable data.
In order to engage respondents and make them want to complete the survey, ensure that the survey is properly introduced. Creating a clear context for the survey is very important.
Don’t add questions that you don’t need to ask. Don’t throw in a bit extra to the survey, just because you can. Stick with questions that support your survey goals and objectives.
Clearly defined and communicated survey policies are essential in a survey project. Policies guide the actions and decisions of those who manage the project and protect the respondents. Policies should consider things like anonymity, how the data will be used and who will have access to the data.
Always conduct a survey pilot. A pilot is an opportunity to get feedback on the messages communicated, instructions, survey questions and the scales used, along with the survey structure and usability on different devices. Based on the pilot feedback, refine the survey and the communications to address any issues identified.
Wherever possible, don’t send the survey to a large group of respondents without first sending it to a smaller group. Confirm that everything is fine with the smaller group before sending to the large group.
If there is an answer that you’re hoping to get, it is easy to bias your questions to get it. Leading questions are the easiest way to do this. An example would be, “How happy are you with our fantastic service?” In that question, you tell the respondent that the service is fantastic. A better approach would be to ask about the service and then use a rating scale that allows the respondent to assess the service level.
Survey fatigue is common, so design the survey to avoid tiring your respondents.
Piloting the survey will give you useful information as to whether the survey is tiring.
When respondents don’t understand survey questions, it can make the survey data worthless. Don’t use complicated or ambiguous language that is full of jargon.
Don’t leave room for ambiguity and don’t rely on prior knowledge. Also, don’t ask respondents to remember things from some time ago.
As much as possible, make sure the questions contain mutually exclusive ideas. This is not always possible and needs to be balanced against the survey length.
Choose open ended questions wisely and don’t over-use them. Open-ended questions are time consuming. They involve much work analyzing the responses. So, be prudent in asking them.
Pay attention to the detail in the survey. Poor grammar and spelling errors are unprofessional and send a message about the importance of the survey. Minute details in grammar can also affect the validity of the data.
Make sure that the response options cover the needs of the target group of respondents. It can be very frustrating to be forced to select an answer when no relevant choice is offered.
Using many different scales has two drawbacks. First, it becomes difficult and time consuming for respondents to answer the survey. Second, interpreting the results becomes difficult. A standard rating scale allows easier comparison of responses across questions.
In multiple response questions, don’t have too many response options. Choosing answers from long lists is difficult and can lead to respondents not taking the time to respond accurately. So, keep the response options reasonable.
Carefully consider the length of the survey, the type of audience and the device that the respondents are likely to be using when they complete the survey. Then design the structure to suit. For example, long surveys can be very frustrating if only one question is placed on each page. On the other hand, placing multiple open ended questions on one page creates the risk of respondents losing what they have entered if they are interrupted and they have not saved their responses.
Do not overuse mandatory questions; use them sparingly. You may be forcing people into a selection that they don’t want to make, thereby invalidating the data. Forcing responses can be very frustrating. And, if someone does not really want to answer a question, they may just choose any response so they can move on.
We prefer not to use long rating scales like a 10-point scale, except for Net Promoter Score. Where possible, each point in a rating scale should be labelled to give meaning (albeit subjective) to the scale points. A large scale can be frustrating for respondents, and unlabelled points leave their meaning unclear. Five point scales and seven point scales are preferable. Not labelling rating scales is a very common mistake.
Consider the audience and the questions being asked. In many cases, you should include an option for the respondent if they do not have experience with the question being asked. It is often unreasonable to expect that every respondent will have relevant experience to respond to every question in the survey. So, consider including a Don’t Know or Not Applicable option in the rating scale.
Select survey deployment methods relevant to the audience and the survey objectives.
Methods for getting electronic survey responses include email, websites, social media, mobile devices and auto-responders. Paper surveys do have a place in some circumstances, however paper surveys are costly to distribute, difficult to follow up and even more costly to process.
Email is one primary way to send survey invitations and to follow up outstanding responses. Engaging survey messages are essential to make people want to help you. And targeted, friendly follow up reminder emails are important to maximise the survey response rate.
Keeping the survey invitation near the top of inboxes (within reason) is a key objective. A proportion of people will respond quickly and complete the survey shortly after receiving the initial invitation. However, there is always a group of people who will require follow up for various reasons. The timing might not be suitable when the first invitation is received, they may have other priorities, they may put off the survey until later, they may miss the invitation, or they may not want to complete it at all.
The idea is to make the entire exercise as unsurprising as possible. When people know what’s going to happen, they feel safe and comfortable, and that makes your survey feel easy to complete.
The email invitation is just one in a series of communications that you will need to design. Typically, several reminders will be needed to maximise the response rate. Be friendly. Recognise that people have busy lives and demonstrate empathy. Be clear, and immediately action any opt-out requests.
Where practical or relevant, offer to share the survey results with those who took part.
Make sure that the people who get your emails are aware of your survey or have agreed to receive emails or offers from you. Unexpected email communications are far more likely to result in spam reports, or deletion.
Autoresponders are used to store lists of people and send automated emails. They provide a way to deploy surveys automatically and for the survey invitation to be personalized.
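Personalisation usually comes down to merging stored recipient details into an invitation template. The sketch below uses an invented {{merge_field}} syntax and field names purely for illustration; real autoresponders have their own merge formats.

```typescript
// Illustrative personalisation of a survey invitation; the template syntax and
// fields are assumptions for the example, not a specific autoresponder's format.
type Recipient = { firstName: string; email: string; surveyLink: string };

function renderInvitation(template: string, recipient: Recipient): string {
  return template
    .replace("{{first_name}}", recipient.firstName)
    .replace("{{survey_link}}", recipient.surveyLink);
}

const template =
  "Hi {{first_name}},\n\nWe'd value your feedback. The survey takes about 5 minutes:\n{{survey_link}}\n\nThank you!";

console.log(renderInvitation(template, {
  firstName: "Alex",
  email: "alex@example.com",
  surveyLink: "https://example.com/s/abc123",
}));
```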
Websites and Social Media provide important forums to conduct surveys and get feedback. Be aware that public networks are open, which makes it difficult to control who sees the survey and responds to it. However, it is increasingly possible (with private groups and demographic information available) to manage where and to whom the survey displays.
Engage participants and motivate them to complete the survey.
The target response rate will depend on many factors, including the nature of the relationship between the respondent and the subject or organization that the survey is about.
How would you feel if you received the survey and its messages? Would you feel that it’s easy to complete? Simple to use? A good experience? Is it enjoyable, even? Survey invitations need to be inviting.
The fastest way to disengage people is to make things hard, boring, or irrelevant. When a survey contains lots of irrelevant questions, people are more likely to abandon it. The experience on every device is also critical. Test the survey on different platforms, screen sizes and devices to ensure a great experience (and not a bad one).
In general, there are two basic audiences for any type of survey: Internal and External.
An internal audience is typically made up of employees or people that belong to an organization or group. Surveys sent to internal audiences tend to have much higher response rates and engagement than those sent to external audiences.
Let’s say a company issues an internal survey to its workers to learn what their challenges are. In this instance, it’s easy to see how the employees may be eager to provide this feedback. It’s a chance for workers to tell management what can be improved. Also, they may benefit by helping the company.
Getting an external audience, like customers or suppliers, to respond to surveys is typically more difficult. Their motivation is likely to be lower than an internal audience. External audiences may not see the benefit in completing a survey. Even when targeting specific groups of customers (such as those who made a recent purchase) and offering a reward for their participation, you still might not receive strong response rates.
Here are some ways to help achieve your target response rate.
When people understand how their survey responses will be used, they are more likely to give their time to help. Communicating this is very important. Be clear about incentives that could be attached to a response.
Think about how you will make the survey aims clear in the invitation, instructions and welcome; keep participants updated with progress; share the collective results at the end; and keep in touch about how the survey is used.
Shorter surveys get more engagement. Respondents typically complete five closed questions per minute, or two open-ended questions per minute. Keeping surveys brief helps avoid “survey fatigue.”
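Using the rule of thumb above (roughly five closed questions or two open-ended questions per minute), you can estimate completion time before sending, as in this small sketch:

```typescript
// Rough completion-time estimate using the rule of thumb above:
// about five closed questions per minute, two open-ended questions per minute.
function estimatedMinutes(closedQuestions: number, openQuestions: number): number {
  return closedQuestions / 5 + openQuestions / 2;
}

// Example: 20 closed questions and 3 open-ended questions => roughly 5.5 minutes.
console.log(estimatedMinutes(20, 3)); // 5.5
```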
Ideally, every survey will have a progress bar that lets the respondent know how close they are to finishing. But people are busy or get interrupted and can’t always finish a survey they’ve started. Don’t let them slip away! Send a few gentle reminders by email to let them know they’re almost done and how much you value their feedback. Space the reminders out by a few days and send them at different times of day to re-engage them.
Go back to the purpose of your survey, and only ask questions that are directly relevant to the purpose. If gender isn’t relevant, don’t ask! If location isn’t relevant, don’t ask! It’s tempting to collect extra information just because you can. But recipients will appreciate you sticking to the point.
Good, well designed surveys and questions help both the respondents and those analysing data. Bad survey design will still get you data, but the quality of that data becomes questionable.
A simple way to encourage people is to offer respondents the opportunity to see survey results. Or, if they are going to be made public, the opportunity to see them before everyone else.
Very often, concerns about survey data giving you a competitive advantage are assumed and unfounded. In fact, many companies – including the largest consulting companies in the world – retain advantage because they share the results of their surveys.
Incentives can motivate people to give feedback. The trick is to know what interests the audience.
If you’re issuing a survey to a “captive” audience like employees or to follow up on a transaction, or are just asking a few questions, you most likely don’t need an incentive.
But for surveys that are time-intensive, or where respondents lack an emotional bond with your company, offering an incentive may be good practice.
When people believe their input will make a difference in the future of your product or service, they are more likely to take time to complete a survey.
Incentives don’t need to be of great monetary value. Just make sure the incentive is relevant to the audience and piques their interest enough to give their feedback. There are also likely to be other factors that can influence people to respond to a survey.
Some things to keep in mind include:
A simple way to encourage people is to offer respondents the opportunity to see the survey results, or, if they are going to be made public, the opportunity to see them before everyone else. Concerns about losing a competitive advantage are often unfounded. Many companies, including the largest consulting companies in the world, create advantage because they share the results of their surveys.
Having a defined budget is fundamental to your success. Know your financial limits and stick to an incentive that stays within those parameters. The cost of providing an incentive can quickly add up, so carefully selecting the right one can protect both the budget and the quality of the responses.
It’s also critical that you deliver the incentive you promised, or you risk alienating your audience.
When deciding who qualifies for an incentive, there are several options: for example, reward everyone, enter everyone into a lottery, or make a donation (e.g. to a charity) based on the completion rate. This decision can have a significant financial impact. Surveys with a guaranteed incentive for every respondent typically offer a nominal gift, whereas lottery-style incentives often have a much higher value.
If you aren’t sure whether your message is clear enough, test it on a few people before you send it.
Make sure that your incentive meets the needs (and desires) of the audience. Take the time to understand what interests and motivates the target group. Rewards do not necessarily need to be financial. A reward could include sharing the results.
One important thing to consider when choosing an incentive is ease of delivery. Online gift cards and coupons make for easy delivery and only require an email address or phone number. You can send them by email or SMS, and respondents get their gift immediately after completing the survey.
Decide when you will send the incentive to the respondent. While it seems logical that you should only send a reward after a survey has been completed, some studies suggest otherwise. Sometimes respondents feel obliged to complete the survey after receiving the incentive. Guilt can be a powerful thing!
If you’ve found the perfect survey incentive, there is a real chance that people will want to get it more than once! It also means that you might get people who aren’t truly interested in your product or service wanting the incentive too. So, consider strategies to disqualify people seeking to capitalise on the rewards.
If your target participants include professionals, then offering to donate to charity on their behalf can be very effective. It can also be extremely effective if your target audience has a shared vision, cause, or mission. For example, if you need to survey the parents of children in your primary school, offering to donate a certain amount to a children’s hospital might be a great incentive. This is because the cause (helping sick children) is something that resonates with every parent.
Reminding people about the survey is really important. Everyone is busy, and non-critical requests fall to the bottom of the list unless you send a reminder.
In fact, when you don’t send reminders, you are likely to get a low response rate! Keep in mind, though, that the reverse can be true: too many reminders can be detrimental. So, carefully crafting your communications is essential. How many survey reminders you send will depend on your survey goals and your audience. For an employee survey, organisations usually expect their staff to respond, so more reminders, and more urgent reminders, will be warranted.
Robust data is crucial. A single data error raises questions about the whole survey report.
Wherever possible, avoid manual processes for data analysis; there is too much room for human error, and the survey data must be correct. Spark Chart survey reports are generated instantly by the system, which removes the risk of manual calculation errors.
The longer the reporting and analysis takes, the more likely the survey results will be ignored or excuses will be made that they are no longer relevant due to the time delay. So, when analyzing and sharing survey results, speed is critical.
In Spark Chart, reports are generated instantly and can be easily customized and filtered immediately. Reports can be saved, copied, printed and exported.
Filter results to develop insights and identify trends. You typically will need to filter according to selected respondents or respondent groups, dates and specific responses to questions.
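Conceptually, this kind of filtering means selecting responses that match a respondent group, a date range, and particular answers. The sketch below uses invented field names to illustrate the idea; it is not Spark Chart’s data model.

```typescript
// Illustrative filtering of raw responses; field names are assumptions for the example.
type SurveyResponse = {
  respondentGroup: string;          // e.g. "Sales", "Engineering"
  submittedAt: Date;
  answers: Record<string, string>;  // question id -> answer
};

function filterResponses(
  responses: SurveyResponse[],
  group: string,
  from: Date,
  to: Date,
  questionId: string,
  answer: string
): SurveyResponse[] {
  return responses.filter(
    (r) =>
      r.respondentGroup === group &&
      r.submittedAt >= from &&
      r.submittedAt <= to &&
      r.answers[questionId] === answer
  );
}

// Example: Sales responses from Q1 where question "q3" was answered "Dissatisfied".
// const q1Sales = filterResponses(allResponses, "Sales",
//   new Date("2024-01-01"), new Date("2024-03-31"), "q3", "Dissatisfied");
```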
Spark Chart makes analysis of results easy. Even still, analysis requires a proper investment of time to review the results, identify trends, and develop recommendations and key messages.
Messages and recommendations can be added to Reports and to Report Shares in Spark Chart. Reports can be created and easily presented or shared with different audiences.
In many survey projects, the results will need to be shared with different stakeholders, from Boards, through leadership teams, to employees, customers and suppliers. Consequently, it is important to have the flexibility to tailor or customise reports for different audiences. What is shared with the Managing Director might need to be quite different to what is shared with all employees.
In Spark Chart, customized reports can be created and easily shared with different audiences. Shared reports may be open to the public or they may be kept confidential.
Reports need to grab the attention of the audience and focus on the most important findings. So, think carefully about choosing the right presentation format, including graph type and colours plus adding messages. Sometimes, what you hide is just as important as what you show.
Editing a report layout in Spark Chart is easy. Later, when sharing the report, only the customised presentation is visible. In addition to filtering, options include selecting from a comprehensive range of graph types, choosing chart colours, ordering the responses, hiding or showing specific questions and responses, hiding or showing elements like additional comments, response rates, themes and more. Key messages and recommendations can also be added.
Use feedback to engage and motivate your team (and your customers).
Spark Chart has features to share reports easily and instantly and eliminates the need to export results into other applications. This is a great way to present the results.
Report Shares can be customised for different audiences. A title, welcome text and key messages can be added to the Report Share and are visible when viewing on the web. And, reports can be broken down into sub reports.
When a Report Share is created a web link is generated. Report Share links can be made public or they can be protected with a password or PIN.