Writing Powerful Performance Indicators w/ Lori Foltz Fabian
Are you and your grant funders speaking the same language? Inaccurate, unspecific, and unachievable performance indicators can spell disaster for a nonprofit. Instead, write POWERFUL performance indicators to ensure smooth programming and happy, long-term funder relationships.
In this one-hour special workshop hosted by Instrumentl, you'll learn how to set your nonprofit up for success with funders by designing accurate and achievable performance indicators.
By the end of this one-hour workshop, you’ll learn:
- Definitions around performance indicators and targets
- How to write SMART indicators with templates for various types
- Hazards to avoid in writing performance indicators
- Big-picture considerations within the context of the grant proposal
Create your Instrumentl account using the link above. Should you decide to upgrade when your trial expires, save $50 off your first month with the code FABIAN50.
Lori Fabian, President, Fabian Consulting, Inc. Lori has over thirty years of experience working with nonprofit organizations nationwide, with a particular focus on the mid-Atlantic states.
As a professional grants manager, writer, and consultant, Ms. Fabian has helped nonprofit organizations raise over $75 million in foundation, corporate, and government support. She has also provided grants research, training, and counsel to hundreds of agencies. A specialist in outcome measurement and program evaluation since 1997, she has spearheaded external evaluation efforts for a wide variety of federal, state, and major foundation-funded programs and projects in Louisiana, New Jersey, and New York, including a county-wide assessment of food insecurity in Passaic County and a national evaluation of a long-standing program for the Foundation Center.
A frequent provider of training and technical assistance in both grantsmanship and program evaluation, Ms. Fabian was the inaugural recipient of the New Jersey/New York Girl Scout Training Cluster’s annual Compass Award for Excellence and Leadership in Learning Facilitation.
Don’t delay, save your spot in this grant workshop today.
Instrumentl Partner Webinars are collaborations between Instrumentl and its community partners to provide free educational workshops for grant professionals. Our goal is to tackle a problem grant professionals often have to solve, while also sharing different ways Instrumentl’s platform can help grant writers win more grants. Click here to save a seat in our next workshop.
Click the video link below to start watching the replay of this free grant workshop, or check out the transcriptions below the video.
Instrumentl Partner Workshop Replay
Instrumentl Partner Workshop Slides
Writing Powerful Performance Indicators - Grant Training Transcription
Celia: All right. So hello, everyone. Welcome to Writing Powerful Performance Indicators with Lori Foltz Fabian.
This workshop is being recorded and slides will be shared with you afterwards. So, definitely keep an eye out for an email or two from me with some follow-up stuff there. Remember throughout, if you have any questions, you are always welcome to drop them in the chat. I just ask that you put three hashtags in front of them. Lori's got some really good questions to ask, so there'll be some discussion in the chat, and those three hashtags will just make sure that I don't miss your question. So, make sure that you do that as well.
In case this is your first time here, this is a free grant workshop by Instrumentl and our partner, Lori. And so, these are collaborations between Instrumentl and our community partners to provide some free educational workshops for grant professionals and nonprofits. Our goal with these workshops is very much in line with our goal as a company and Instrumentl. And that is, really, to support grant writers and nonprofits in order to get the tools, insights, and expertise that they need to find more funding with less work. Right? So, just speeding up all of this.
And so, I'm just going to take a minute to explain kind of what that means. Right? So, we love efficiency. I would say we're an efficiency-obsessed team. And I think that this is an area we can always continue to improve. So Instrumentl, at a high level, we're saving people time every week on their grants process. And we're increasing grant output pretty substantially, by 78% within the first year of using it. So, how do we do that? Just a couple of slides here to kind of give you an idea of what this looks like.
Instrumentl is a prospecting, tracking, and management platform for all things grants. That means everything you're doing when it comes to grants sits underneath one little umbrella, and it's all in the same place. We have over 12,000 active grant matches on the platform. So if you see it, it is available to be applied for. We're also going to send you weekly updates on deadlines and new opportunities. So, it's kind of like a personal assistant that sits in the background and just works for you 24/7.
Of course, once you find opportunities, the prioritization and the evaluation process becomes the next hurdle to overcome. Right? And it can be a pretty substantial time suck. So, for the people who rely on digging through 990s, which is a fantastic practice, right, because it gives us such a better idea of who the funder is, we're instead using some visualization tools to just simplify all of that information. So, for example, what you're looking at here are a couple examples of this. In the top left, we're showing the median giving for this specific funder. So really quickly, I can see what they are giving every year.
And the next one to the right, we can also see geographic focus. We can dig in deeper: specifically, which organizations in those geographical locations are these folks funding? And then in the bottom, you're also looking at some information on competition. So, how is this organization prioritizing new grantees versus past grantees? And my little pro tip here is anything over about 30% new grantees is a go. Anything under 30% is going to be a little bit more competitive, right? So essentially, what we're doing here is we're just taking all of that multi-year data, which would require lots of digging through 990s, and visualizing it for you so that you can see it.
But then, of course, sort of once you have figured that out, there's the need to communicate all this information within your team. So assigning tasks quickly, making sure that your team has all the information they need in terms of notes and all the documents that potentially they need. So whether that's templates or impact studies, or performance indicators. All of that information is going to be right there for you.
And then, of course, once you get to the post-award grant phase, you want to make sure that you're tracking all of those deadlines and that you can really quickly see what is what. And so, we're also just going to help you stay organized, whether that's through a sort of traditional tracker that you see in the background here, right, where we're just listing everything that's going on, and sort of the status. Or if it's in a calendar view, like you see kind of upfront. Or if it's in our reporting tool, which allows you to really quickly take a PDF or CSV file that you can share with your board or at agenda meetings just to kind of keep everybody on track.
So, essentially, that's kind of my spiel. I will say, one thing to note, we did launch a couple of new features recently. And so, we've been giving away 14 days free for anybody who wants to try out those new features. I don't know how long that's going to last. So if you've been thinking about it, you might get in there and sort of pop around with some of our extra features, like our calendar view or application cycles, which lets you see things like proposal due dates, alongside LOI due dates. And then if you're a consultant, we can definitely talk about that as well.
So, I am going to--I will just drop our signup link there in the chat. If you are interested, this is a great time to try out those new features. If you're interested in grants but maybe you're not quite ready for a tool like this, you'll definitely want to stick with us to the end because we are giving away a freebie, a couple of freebies. The first thing is some really awesome templates from Lori. So, you definitely want to make sure you stick around for that. And the second thing is our 40-page guide to evaluating RFPs. So maybe you're not quite ready for a tool like Instrumentl, but you're still thinking about grants, you'll definitely want to kind of stay with us throughout all of that.
So with that housekeeping all out of the way, I'm going to go ahead and introduce our guest, Lori. And, Lori, you could share your screen whenever you're ready. I'm excited to talk about performance indicators.
As a reminder, we are accepting questions. You can always drop those in the chat if you want. Just put those three hashtags in front of them for me. But I'm really excited to introduce Lori Foltz Fabian. Lori has over 30 years of experience working with nonprofit organizations nationwide with particular focus in the mid-Atlantic. Like she said, she's in Virginia. And as a professional grants manager, writer, and consultant, she has helped nonprofit organizations raise over $75 million in foundation, corporate, and government support. She's also provided--
Lori: Up to 82 now.
Celia: 82, all right. Nice. We'll have to update that.
She's also provided grants research, training, and counsel to hundreds of agencies. A specialist in outcome measurement and program evaluation since 1997, she has spearheaded external evaluation efforts for a wide variety of programs and projects, including, very interestingly enough, a countrywide assessment of food insecurity in Passaic County and a nationally--
Lori: Countywide, yeah.
Celia: Yep. And a national evaluation of a long-standing program for the foundation center. So, Lori, we're so happy to have you here today. I will be quiet now and I will let you kind of take over as well.
Lori: Well, thank you so much, Celia. And thank you for this opportunity, guys. I've got to say, what Celia was showing you with Instrumentl--it's a really good grant management tool, I have found. It's so integrated. I do really suggest that you take a look at it if you haven't already got it.
Well, thank you for the kind introduction, Celia. Let me advance here. Yeah, this is who I am. Yeah, I've been at this a long time.
And I think that one of the things that I bring that's kind of unique is that I have seen the evaluation process from both sides. I've written the grants and the performance indicators and designed the evaluations. And then I've had to execute it, on the other hand, as an evaluator for projects. And so, I remember when I was first starting to do the program evaluation and that was kind of before the Feds had the requirement that if you wrote the proposal, you couldn't be the evaluator. So, this was some time ago.
I remember telling my client, “You really should shoot your grant writer for writing these performance indicators this way.” So, I have learned through good and bad experience what works and what doesn't work as far as writing good performance indicators. And I'm hoping that I will spare you some of the hurdles that I have encountered along the way.
Celia is going to be monitoring the chat for me. There will be several spots in here in which we invite you to put input in. And what she is going to do is she's going to kind of summarize or highlight certain things that come up in the chat in the interest of time. Also, if you have any questions, please put them in the chat and Celia will be directing those. We'll try to address a bunch of those at the end in the discussion section. And I think with that, is there anything anybody needs before we begin? Okay, let's get going.
Okay. We're going to start out with a poll. I saw in the chat, a bunch of people said that they were consultants, and other people said they were with organizations. But this will be in the form of a poll. Are you an in-house or external grant writer? And do you write primarily for private grants--that would be corporate or foundation--or public, government grants? Whichever you do more. And, Celia, you can do that poll. Here we go. Sorry about that.
Celia: Yeah, answer the poll. This will be a really good way just to kind of get a good look at everybody in terms of where people are at. Oh, wow, we're already up to 50%. You guys are quick. This is a quick group.
Lori: I like that.
Celia: And we're seeing some stuff in the chat too: in-house, both public and private. "I'm both," from Arnold. Yeah. Marie is a new grant writer, in-house, private. Lots of both. Lots of both. That's interesting. It's cool. All right, I'll give it another 15 seconds or so. If you haven't answered in the poll, go ahead and do that.
Yeah, Martha. Yeah, we need to work on these poll options. They only let you choose one. All right. All right. Cool. I think we're going to go ahead and end the poll. Let's see what the results are. Let’s see what that looks like.
Lori: I see most people are in-house, and most people write for private. One thing that I've noticed--and I do a whole lot of government grant writing, federal and state. We also do the foundation and corporate. But we're kind of known for working on the larger federal grants. The performance indicator information I'm going to promote and give you is designed to satisfy the pickiest funders. Okay?
So hopefully, by the time we get finished with this, you'll be able to satisfy the most nitpicky. And I'm assuming that would be at the federal level; sometimes the state is worse. But more and more, the private foundations, and especially the corporations, are also looking for precision as far as the performance indicators go. But just this last week, I actually had one funder suggest that we make our indicator a little bit more vague, which was a first for me. So really, long story short, know your funder. And that's where Instrumentl will help you get the relationships going to where you know those nuances.
So, I just wanted to give that little caveat to begin with. So, okay. So, let's continue on. Let's go on here. So, here's what I plan to cover. Number one, I'm going to talk about definitions, because the whole program evaluation field in nonprofits has its roots in so many different fields that there's a lot of confusion with jargon. I'm going to explain to you exactly what I mean when I'm talking about performance indicators. We're going to talk about what's acceptable versus what really has some power. We're going to give you some very specific templates. We're going to go over, just like fill-in-the-blanks, how to write particular types of performance indicators. And you'll also get this as a freebie document at the end.
Hazards, these are the things that I have run across and tripped over over the years that I want you to be able to avoid. And then finally, you can't look at performance indicators in a vacuum. They have to be part of the whole proposal. They have to work with the whole proposal. So, we're going to look at performance indicators in context of your grant proposal and also of the program.
So, first thing. What do I mean by performance indicators? They're statements of quantifiable information. And they're designed to determine how well you meet specific goals. Okay? They've been called all kinds of different things: objectives, outputs, indicators, performance measures, KPIs. But here's what they are. In grant proposals, usually, we are asked to provide targets or indicators for how hard we work or how hard the program works (the agency effort), the consumer response to that effort (how many people enroll or participate), the expected outcomes (the results of your intervention), and then, more and more, I'm finding that funders are asking for quality assurance indicators and targets. And this includes satisfaction. And we'll talk about it a little bit more. Satisfaction is not an outcome. That's a common mistake that people make.
So, what are the outcomes? I use the acronym BACKS. It's a positive and intended change in either the behavior, the attitudes or beliefs, the condition or status, the knowledge, or the skills of the person or subject your program seeks to improve. Now, if you are doing environmental work, like wetlands or something like that, or working with animals, you're going to look more at condition or status. And by condition or status, I mean: they were homeless, they are now housed. They dropped out, now they're back in school. They were in a hospital in poor status, and now they're good. So, that's what I mean by that. Okay.
Any clarification I need to do there? Are you all good? Okay. Okay. So, I'm sure everybody has heard of a SMART objective or a SMART indicator. It's got to be specific, measurable, attainable, relevant (or realistic), and time-limited (or time-framed). Okay. Everybody has heard of that.
Although I have seen people write what they called SMART indicators that were missing some of these things. So if you're going to write an indicator, make sure you've got all of that. And if you use the templates I'm going to give you, they will automatically be this.
But the performance indicators that really sell your grant proposal exactly match the funder's needs and requirements. I mean, exactly. Use as objective a measure as you possibly can for that particular target. They're impressive to the reader while still remaining credible. You can't have them going, "Really? You're going to achieve that?"
Complete and understandable, not a lot of jargon. And for the most part, they always use active voice. Again, there are exceptions in the case where you're using non-human subjects. Now I know I'm preaching to the choir, because you guys obviously understand that performance indicators are important or you wouldn't be here. But just in case you need a little convincing, or if you want reasons that you can give to program or administrative staff around this, I have seen organizations actually lose money after having gotten a major grant. Because of the way their performance indicators were written, they had to do a lot of stuff they didn't budget for. Because of the way the performance indicator was written, they were committed to it and they had to do it. Also, if you write your performance indicators poorly as a grant writer, your people might not follow through. If you don't write it right, then you might be promising something they can't do, as far as either program, administration, or data collection.
Finally, if you've got a multiyear grant, like a five-year federal grant and you've written your performance indicators badly, most of the time, or a lot of the times, your program officer will not let you change them. So, you need to get it right the first time or you could be stuck with those for the entire five-year period of the grant.
More consequences. I have heard--I've not experienced it personally. But I knew somebody who had a five-year grant, and their staff kind of quit collecting information after the second year in the way that they had said they were going to in the grant proposal. At the end of the five years, the funder demanded the last three years of the grant back. They were able to talk their way out of it. But that was a bad moment.
Another thing is, if things don't work well with your performance indicators, this agency may never fund you again. And since they often talk to each other--foundations and corporations know each other, and even the government agencies often talk--you may be denied grants from other agencies. So, it is very important to write your performance indicators well.
So, if you could put in the chat, have you personally experienced these or any other difficulties around performance indicators? Anything specific that you want to share? Just take a second and think about it. And if you have, pop that in. And, Celia, when we get to the end, we can talk about that. Okay?
Celia: Yeah, I'll grab anything. So, definitely drop it in there.
Lori: Okay. Great. Thank you.
So now, we get to the recipes, the templates. So, these are some basic templates for SMART indicators. Here is one for if you need to quantify or set a target for your level of activity, your output, your agency effort, your level of service: program name and/or designated staff position will provide or perform how many defined actions to or upon your defined subject by or within what timeframe. Okay? That's to show the level of effort, the output.
Now, this is also output. But this has to do with consumer response to your effort. So number of--define who your participant group is--and we're going to talk more about how to do that later--will enroll, attend, or participate in program activity over or within a certain timeframe.
Now, again, if it's about non-human subjects, you have to use a passive voice. Number of subjects will have this activity performed upon them by either the program or a designated staff over or within a timeframe. Okay?
So again, that's another output. You actually can combine that into a single output indicator. Program name and/or designated staff will provide or perform a number of actions to or upon a number of subjects by or within a timeframe. So, that just takes the first two and squishes them together.
Outcome, we have a few different types here. This is kind of your basic one.
Number out of number--this is your universe, and this is the number you'd expect to achieve; also give the percent--of participant group in the program will achieve what by when? Okay? Now, this is just kind of a general one.
Now, sometimes, like for example, if you've got a program that's working with people who will be participating in or working on certain things but not others, or achieving certain things, but not others in the program. Say you've got a job training program and some of them need help in writing a resume, others don't. Some of them need help in achieving a GED or a certification.
Anyway, sometimes if there's a variety of different success indicators that they could have, you can use the menu style. So in this case, it would be: number out of number, or percent, of the population in the program will achieve what is an acceptable number of the specific outcomes from the following by what time. And then you list the various desired outcomes. If it's just one, that's fine, as long as you say what the acceptable number is. If it's at least two, then you can do that. Can you see how this could be useful under certain circumstances? Okay.
And if you have any questions again, please pop that in the chat. I'll be happy to clarify more later.
Now, in many cases, you've got an organization or program in which a family or an individual could come in with any number of presenting problems. And the whole idea is to provide case management to address whatever their needs and goals are, their individual needs and goals are. So, you can't obviously list every single possible option. So, you can't really use a menu here. But in most of these kinds of cases, you will be--your case managers will be developing an individual progress plan or an individual service plan, an ISP. And so, to indicate success on this, you can say number, percent, of, define your subjects, completing an individualized case plan, in the name of the program, will--and you can do one of two things here. Or you could actually combine them if you wanted to. You could say they will achieve their primary case goal, because usually they have one overarching goal, one thing that they really, really, really want to achieve, and/or achieve X percent of their case goals within a timeframe.
Now, why do we say percent rather than number of case goals? It's because they might only have one, or they might have 10. So, that's why we look at a percentage of their case goals as a measure of success instead of a number, because they could vary widely from case to case. Does that make sense? Okay.
Now, here's satisfaction. Again, satisfaction is an indicator of program quality. It is not an indicator of success as far as your outcomes go. And the example I’d like to use is, little Johnny might absolutely love his math tutor and give him a glowing review. But he still could be failing math. Can you see why the two are different? This has to do with the perception of the consumer on the services you provide, not whether or not that service actually achieved what it was trying to achieve.
So, the way that you can write this kind of indicator--and you don't have to scribble these down, because like I said, we're going to give them to you. Number, or percent, of--define your subjects. You need to say specifically the number or percent completing a satisfaction survey at a particular point in time. Is it immediately following service? Is it a year out? Is it in the middle? They will indicate either high or very high levels of satisfaction with the program, or a specific part of the program.
Now, for high or very high levels of satisfaction, I would also suggest you could put in here "as indicated by responses to relevant questions on a survey," or something like that. And we'll talk about that with the add-ons. You can specify what you mean by high or very high levels of satisfaction. So, we'll talk about that in a minute.
Here's your add-ons. You won't need these for all of them. But some funders will want you to be able to say you'll maintain those gains for a certain time frame. For example, several federally funded grants require you to follow up with participants six months or so after they have completed the program. Some of them want you to compare to a baseline, especially if you're doing a demonstration project. Some of them want you to define your geographical area. And others want you to say "as verified by" and your data collection method. So, those are add-ons that you can tack on to any of those templates depending on what you need.
Okay. So in the chat box--and, Celia, if you could kind of take a look at this and let's talk about it in real time after we get a few responses.
Lori: Where do you see some traps in this example?
Celia: Let's see. We've got, which survey a trap could be not all participants fill out the satisfaction survey.
Lori: That's a big one. Yeah. So, it's not just participants, it's participants completing a satisfaction survey.
Celia: Yep, 90% is pretty high. It doesn't say when the survey will be administered or when it will fall in.
Lori: That's another one. This indicator, with those corrections, would be fine if it's supposed to be a quality indicator. But if you're trying to pass this off as an outcome, not so much.
Celia: Yeah. We also have, what's the program?
Lori: Yeah. There you go.
And also, yeah, are they talking about the program in general? Or are they talking about an element of the program? Because the satisfaction survey isn't usually a one-question thing--"How did you like the program?" It usually breaks down. So, okay, very good. You guys caught it well.
Now, here are the potholes, some of the hazards to avoid. Here's a big one that I found: do you lack numbers? If you don't have numbers in there, it's not very specific. Are you rambling? Are you using jargon? Are you using ambiguous wording? I have seen clients' draft indicators for me that really didn't promise a darn thing. And I'll give you an example.
During the year--oops, sorry. Back it up there. I can't see the bottom because of my control bar there. During the year, collaborative efforts will continue to form and expand partnerships for replication, as evidenced by evaluations and program portfolios, blah, blah, blah, blah, blah. This is not a good performance indicator. This is a lot of fluff. Okay?
Are you cramming? I have found even RFP writers do this. They'll write an indicator. But it's got more than one piece of information in there. And the different pieces of it might have different answers. So, how in the world are you supposed to report on that? This is an actual example. At least 80% of students will come to school motivated to learn and get along well with other students. So, this kind of gives you a quandary here. Do you only count the number of students who come to school motivated to learn and who get along well with other students? Or do you count the ones who come to school motivated to learn, but don't necessarily get along with other students? Or vice versa? So make sure when you write your performance indicator, each one is measuring only one thing. Okay?
Now, here's one that was kind of alluded to earlier, like with the satisfaction survey where we said 90% of participants, and that kind of did not acknowledge that all of the other participants might not fill out a survey. I see a lot of cockeyed optimism in my day, and I've been guilty of it too. There is always a drop-off with your numbers: from outreach, to sign-up, to program participation, to completion, and then to follow-up. Okay? There's always a drop.
So, for example, say you're running a foster care child placement agency. Your effort, you provide outreach. Put out a whole bunch of ads, visits to churches, solicitations, social media. And then you get a response. Families inquire about the process. So then, you conduct orientation. So, the number of people you reach out to is going to be much larger than the number of people who actually inquire. And then you're going to conduct orientation sessions. Then a subset of the people who inquired will attend the orientation.
You will then explain and assist with applications for those families who want to be considered as host families. So, this group will be a smaller number than that. You'll perform a home study process on those families that submitted the application. Only a certain percentage of those will actually be licensed and accepted as resource families. Then you'll identify and match children. And again, you can see that here: only a subset of the licensed families will actually have child placements. Do you see how this funnels down? When you're writing your performance indicators, you have to always keep that funnel process in mind.
So, that leads to this: how are you cutting the herd? Of all the subjects who have had contact with your program, which of those subgroups is your indicator measuring? Okay? For example, you've got to define your universe. Is your indicator or target referring to subjects who have been exposed to information, attended an info session, completed an intake form (a lot of people who complete an intake or enrollment don't show up), attended at least one session, remained involved for a certain amount of time, successfully completed the program, completed an exit survey or interview, or completed a follow-up survey or interview after exit? So you can see, again, you've got a big funnel down here. So when you're writing your performance indicators, keep that in mind.
Do you have the right fit? This is a little different thing. And this is where Instrumentl is very, very helpful. Does your indicator measure what the funder wants to know or what you want to tell them? Ideally, it should do both. But if you had to choose between one or the other, you want to make sure that you are able to tell the funder what they want and need to know. And then, is your data collection instrument appropriate to the kind of target or performance indicator that you're writing? So, it's two different types of fit. One is, does your indicator fit what the funder wants to know? Secondly, are you measuring that indicator with an appropriate instrument? And I could do a whole half-day seminar on developing appropriate data collection instruments.
So, how do you find good-fit funders? Obviously, you can use research to evaluate the fit. Consider competitiveness. Celia, can you address a little bit more how they can do this with Instrumentl?
Celia: Yeah, absolutely. I'm just going to add myself here. So, Instrumentl. I think I showed this when I was presenting at the top. Essentially, when you set up, you're setting up a project. Not only are we using an algorithm to pull out funders that we think are a good fit based on what has worked in the past with other similar organizations, but also based on the criteria you've given. And that's running in the background. So, you can set it up, leave it alone, and come back, and you'll always have good fits there.
But then beyond that, we're actually digging into a lot of that data. I know grant writers use 990 data for foundations; it's such a wealth of information, but it is just so painful to go through, right? And to find out how an organization has changed its strategy over a couple of years means going through multiple years of 990 data on a single foundation. So, we're taking all that data and turning it into really quick visualizations.
So within five seconds, you can see: who have they funded over the last few years? In what locations? What percentage of new grantees versus repeat grantees? What's the median amount? You're able to answer those questions super fast in order to decide if it's a good fit and whether to keep moving or double down. So, yeah.
Lori: Yeah, that's one thing I found in doing grant research: I always look to see if organizations similar to ours have received funding, and also what size grants those similar organizations have received. Because a funder may have a grant range where they've given $2 million grants, but that might only be to a college or university, whereas to a small local organization they might give $10,000 grants. So, that's where this is really important.
Sometimes you can connect with other applicants. Like if you have a relationship with them or if they're not going to be competing with you on this particular application, you can find out some of their prior grantees and you can talk to them. And always, always, always, especially with the private funders and especially with the smaller funders, relationships are important, really important at the private level I find. And so, take every opportunity that you can to interface with them. A lot of times they'll say, “No, we don't want to talk to you. Just send us an LOI, or whatever.” But if you can develop that relationship, do everything you can to do so.
Also, with the very largest ones, like with the federal grants and with a lot of the state grants, they will have designated program people whose job is to be helpful and answer questions for you. Don't be afraid to reach out to them.
So when you're trying to figure out how are we going to get this data? You need to ask yourself, “Okay, who do I need to get this? Who will be providing this information? Who's the best source of the data to measure this indicator that I have written? What am I trying to measure? Am I doing outcomes? Am I measuring processes? Participation? Expenditures? How can we best collect the data?” And I have some suggestions, at least with outcomes.
Really, behaviors… Behaviors are something that you see, or that somebody sees. And I'm not talking about perceptions of behaviors. The only way you can really measure behaviors objectively is to have somebody observe those behaviors, preferably with a standardized rubric that lists the things you're looking for. Okay?
Or you can get an involved party's observation. In the case of a child, or some other subject who can't report for themselves, a teacher or a parent can report on behavior. Okay? Attitudes, beliefs, or perceptions: the only way you know a person's attitude, belief, or perception is to ask them directly. They're the only person who knows that.
I had one client tell me one time, “Oh, yeah, I know what their attitude is. I can tell by the way they act.” No, you can't. You don't know the motivations behind their behaviors. You can report on their behaviors, but you don't know the attitudes or beliefs behind those behaviors. They might surprise you.
So, to ask people, you can do it in writing through a survey, you can gather a group of people and do a focus group, or you can do one-on-one interviews. Those are pretty much the ways that you measure attitudes, beliefs, or perceptions. Conditions or status: again, especially when you're talking about non-human subjects--say you're looking at the status of a vacant lot and how much trash is in it, or whatever--you can do an observation.
Like with newborns, there's the Apgar score; there are weights and measures that you can use for status. Blood pressure, blood sugar, HbA1c, those sorts of medical measures, other weights and measures. You can also examine artifacts: pieces of evidence, documents, social media posts, anything like that. Those are things you can look at to assess condition or status.
Knowledge: kind of the gold standard on whether they've gained the knowledge is a pretest/post-test. You see where they were at the beginning, before your intervention, and you see where they are afterwards. Now in some cases, with knowledge, you can have an oral interview with the subject to see if they demonstrate mastery, or they can do a presentation. These are some ways you can measure knowledge.
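As a quick illustration of the pretest/post-test idea, here's a small sketch with entirely made-up scores for five hypothetical participants. It computes the two figures a funder report would typically want: the average gain and the share of participants who improved.

```python
# Hypothetical pretest/post-test scores for five participants (0-100 scale).
pre  = [40, 55, 62, 48, 70]
post = [65, 72, 80, 60, 85]

# Gain for each participant: post-test score minus pretest score.
gains = [after - before for before, after in zip(pre, post)]

avg_gain = sum(gains) / len(gains)
pct_improved = sum(g > 0 for g in gains) / len(gains) * 100

print(f"Average gain: {avg_gain:.1f} points")
print(f"Participants who improved: {pct_improved:.0f}%")
```

Note that an indicator phrased as "average gain" and one phrased as "percent of participants who improve" can tell quite different stories from the same data, so pick the framing that matches what you actually promised.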
And again, skills, very similar. They can do a pretest/post-test on various skills, they can do a demonstration, or a trained observer can watch them performing a certain task to see how well they do it. So, these are some common ways of measuring those BACKS: behaviors, attitudes, conditions, knowledge, and skills.
Another hazard to avoid: have you really considered how much time, effort, and resources are needed to collect, combine, and analyze the data behind your indicator? Remember what I said back at the beginning, that I've seen grantees lose money? It's because they didn't do this. They did not realize that the performance indicators they had written were going to cost them a lot of staff time to measure.
So when you're thinking about it: who's going to collect this data? Is it going to be your program person? A supervisor? The client themselves filling something out? How long will it take? How often does it need to happen? Once the raw data is collected, where does it go next? Do you have a pile of paper surveys that then have to be put into some sort of database or Excel spreadsheet in order to be analyzed? Who's going to do that? Who is going to compile the data to create a report, and how? How and where is the data going to be compiled? Do you have a database you can use? An electronic health record, an Excel spreadsheet, even a paper tally form with a checklist? How much time is that part going to take, and how often does it need to happen? Do you have to do monthly reports, quarterly reports, semi-annual reports, annual reports, five-year reports? Those are things to think about.
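Those questions lend themselves to a back-of-envelope calculation before you commit to an indicator. Here's a sketch of one way to do it; every figure below is a hypothetical placeholder you would replace with your own program's numbers.

```python
# Back-of-envelope estimate of the staff time and cost one survey-based
# indicator will consume. All figures are hypothetical placeholders.

surveys_per_quarter = 150
minutes_to_administer = 10     # staff time per survey
minutes_to_enter = 5           # keying a paper survey into a spreadsheet
hours_to_compile_report = 4    # per quarterly report
hourly_staff_cost = 25.0       # fully loaded hourly rate, hypothetical

# Total staff hours per quarter: per-survey handling plus report compilation.
quarterly_hours = (
    surveys_per_quarter * (minutes_to_administer + minutes_to_enter) / 60
    + hours_to_compile_report
)
annual_cost = quarterly_hours * 4 * hourly_staff_cost

print(f"Staff hours per quarter: {quarterly_hours:.1f}")
print(f"Estimated annual cost: ${annual_cost:,.2f}")
```

Even with these modest made-up numbers, the indicator costs over 160 staff hours a year; if that cost isn't in the budget, it comes out of program delivery.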
Now, this is more often in clinical settings, like behavioral health care or healthcare settings. But oftentimes, if an evidence-based data collection instrument has been mandated, you have to buy it. And oftentimes, a staff person has to be trained and certified in its use. Again, this is mostly in clinical settings. And that training can cost thousands of dollars because--well, pre-COVID, they would fly people out to Denver, or wherever, for five days of intensive training. Now, more often, they do it online. But sometimes they still require on-site training.
Sometimes only certain licensed professionals are allowed to administer certain data collection tools, screening tools, or diagnostic tools. So, if you have something mandated, you need to make sure that your grant proposal and your program design have those licensed professionals in the budget. If you're going to create an instrument yourself--please don't, unless you've got somebody who really, really knows how to do this. Or you can hire somebody like me, or somebody else who can help you design instruments.
But if you need to create an instrument, who's going to do that? How long is it going to take to create it and test it? You need to have a beta version before you go live. If you need to bring in a consultant, how much are they going to charge? And then finally, I'm working right now on an evaluation for a certified community behavioral health center, and they had to buy a new electronic health record system in order to measure and report on some of the grant's required indicators. So, those are some things you need to think about as far as cost considerations.
Here's another issue as far as competitiveness goes: have you set the bar too high? I've had people just promise the moon and say, “Oh, yeah, we can do that.” And this is really hard when grant writers are working in a vacuum and can't get their program people to cooperate and help out. You have to make sure that you haven't set targets that might not be achievable, because if you do, you could be setting your program up for at least perceived failure. So, be careful not to set your bar too high.
On the other hand, don't set the bar too low. Make sure your targets are, to the best of your knowledge and looking at prior grants if you can, higher than or similar to those of other grantees who have gotten this funding. If not, are you really going to be competitive? You might have a very valid reason for having lower targets: maybe it's a smaller program, maybe you have a more difficult population. But in your grant proposal, you'll need to specify why you should still be considered even though your targets are relatively low. Think about it first so that you can anticipate and head off that objection from the funder.
Celia: I didn’t mean to butt in here. I just want to flag that we're at the top of the hour, and I know some people have to jump off. So, do you want to share any final thoughts?
Lori: Okay, okay. Okay. I will jump. Sorry, I didn't get the warning. Okay.
Finally, don't make your program accountable for something that you can't impact. For example: “As a result of distributing informational brochures, heart attack death rates will be reduced by 5% in the metro area in five years.” This is an actual indicator that a client came to me with; the funder was upset with them, and I had to negotiate a different indicator for them.
Finally, don't make things up. Work with your program people, involve staff, and document their sign-off. And after the grant award, review it all with them before they get started. Does the rest of your proposal make it believable? Do you have too many indicators? And are you creeping the mission?
So, I guess, we don't really have time to share stories right now about hazards. We've got the follow up. Celia, you want to address the freebies?
Celia: Yep. It'll be in the email as well, and I dropped a link. But essentially, fill out that form and we'll send you her template as well as a great tool.
Lori: Okay. Okay. And then here are some upcoming workshops from Instrumentl.
Celia: Yep, that'll be in the email too. So, don't worry about it if you don't get a chance.
Lori: Okay. So now, for those of you who are able to stick around for a minute or two, Celia, are there any burning things that--or questions that you think I should address right now?
Celia: Yeah, let's hop into those. And then the other thing we can do, Lori, is I'll send these to you. So if you want to write out any answers, we can also send those along to participants.
Lori: Yeah, I can write up an FAQ.
Celia: Let’s take maybe five minutes. So anybody who wants to stick around for another five minutes, we'll keep going.
All right. We had a question about whether or not you suggest putting performance indicators in letters of interest?
Lori: It depends on the funder. If you're talking about a major federal grant, I would say--actually, no, I'm sorry. Now that I think about it, I think it's too early once you get to an LOI because a lot of times, you've only spoken with the program people to get a very general idea of your program. And I think you need to go into much more detailed conversations with them before you put that in writing and give it to a funder.
Celia: Yeah, that's great. There's another question: will you discuss how to develop a pilot program proposal for funding? Maybe performance indicators for a pilot.
Lori: A pilot.
Lori: A pilot or a demonstration program?
Lori: Okay. In this case, you need to describe it very well. Well, I can hear a couple of different concerns in there, so I'm not sure if I'm addressing the correct one. With a pilot program, you don't know what your targets are going to be as far as results go, because it's a demonstration program. So, I think what's important is that you set targets more for your processes: your activities, your outputs, how much outreach you're going to do, what your methodologies are going to be. And then you can give a hypothesis as far as what you think your outcomes will be. You can write, “We estimate that approximately X will achieve this.” But I would not phrase it in such a way as to promise it. You can say, “These are the anticipated results. However”--
Lori: --and then we will report.
Celia: Sure, that makes sense. Last question, and then we'll--and then we can follow up on any of these other ones as needed, is how many achievable outcomes do funders want to see per program?
Lori: You know what? Don't kill your program people. Use as few as you can get away with. For each of the particular goals that the funder sets out or that you set out, oftentimes, you don't need more than one. Sometimes, if it's kind of a nuanced goal that has more than one element to it, you might need a couple. But keep it to the minimum. You really don't want to kill your program people. They want to be doing good in the world, not measuring stuff.
Celia: That makes sense. Okay. I think we're going to have to cut it there. But we will follow up with everybody. And we'll make sure that you kind of have everything that you need, including the deck, the recording. So, just keep an eye out for an email from us.
The deck will also include all of Lori's contact information. I know that there's a lot of people on here who are asking for that. Lori, if you want to drop it in the chat, you can do that as well.
Celia: But, yeah, we really appreciate it. Such a good discussion. We couldn't get through it all because there's so much good information in there. So, I will follow up with everyone with any additional information. All of those templates, by the way, are in that freebies link. This will be in the email that I send you as well, but you can also click that little link right there. Give us a little bit of feedback on the webinar, invite some friends who might be interested in another webinar, and then we will make sure those get straight to your inbox.
So, yeah, that's it. And, Lori, thank you so much again for being here. This has been great.
Lori: It's my absolute pleasure. And I put my email address in the chat. And it's also going to be in the deck when you receive it. So, please feel free to drop me an email. If you have any further questions, I will be more than happy to talk to you.
Celia: Awesome. Thanks, Lori.
Lori: My pleasure. Thank you, everybody.