
An official website of the United States government, Department of Justice.

Transcript: Webinar on the National Institute of Justice Innovations in Measuring Community Perceptions Challenge


Challenge Closed

Thank you to all submitting teams. We anticipate announcing winning teams on November 1, 2023. 

On June 6, 2023, NIJ held a webinar to provide an overview of the NIJ Innovations in Measuring Community Perceptions Challenge, which NIJ launched to address the challenges of collecting consistent, rigorous measurements of community views, in particular the views of those who live where crime and police presence are most intense and who are less likely to participate in surveys or community meetings. Following is a transcript of that webinar.


STACY LEE: Good afternoon and welcome to the National Institute of Justice Innovations in Measuring Community Perceptions Challenge webinar. It's my pleasure to introduce Dr. Elizabeth Groff, Senior Advisor at the National Institute of Justice.

ELIZABETH GROFF: Good afternoon and welcome. I'm joined today by NIJ Director Nancy La Vigne and NIJ Computer Scientist Joel Hunt. Let me take you through the agenda for today. We're going to begin the webinar with some introductory remarks from Director La Vigne who was the inspiration for challenge and who has been deeply involved in its creation. Then we'll discuss the motivation for the challenge and describe the set of core requirements we have for any proposed method. From there, we'll explain the structure of the challenge and provide some information about what to put in your entry and how it will be judged. Next, we'll describe the prizes that are up for grabs, the three components of the submission, and who's eligible to enter. We'll end with an opportunity for you to ask questions. At this point, I'm going to turn it over to Director La Vigne for her introductory remarks. Director?

NANCY LA VIGNE: Hello, everybody. Thanks for joining this webinar. This is the culmination of many months of work. I want to thank Dr. Groff and also my colleague Joel Hunt for contributing to this, and a whole bunch of other people both at NIJ, as well as our colleagues at the Bureau of Justice Statistics. There’s been a lot of people who have chimed in on how to develop this challenge. NIJ's challenges are an exciting, and for us, a relatively innovative way to spur new ideas and new creativity in the field. Most of the research and development work that we do is supported through grant solicitations that occur on an annual basis, whereby everyone's waiting for a new solicitation to come out and they have roughly 90 days to respond. It's a big application. There's a lot of process and some degree of administrative burden associated with submitting a proposal and then only a certain number of proposals are accepted for award, and only after that award is in place does the actual research start.

Challenges are different in that we are challenging the field to bring new and innovative methods to bear on particular issues that are thorny, and those that submit the best approaches will receive cash prizes and recognition. Like I said, we've done a few of these in the past. We're really excited about this one for today. Liz will go into more detail about the nature of this particular challenge. Of course, I'm sure if you're on this call, you've already read the written documentation.

But I did want to spend a little bit of time sharing why we're doing this and why it's important to us and important to me as NIJ's Director. Namely, it's been a real source of frustration for me, over the years, that we don't, as a research community, have a suitable and satisfactory way of accurately representing the opinions and perceptions of people with regard to what they think about police and how they've experienced being policed in their communities. Most of the methods have biased samples or underrepresent the people that I would argue matter most in this context of policing in this day and age, with trust that has been eroded over time. And that's people who live in the highest-crime, most heavily policed communities, which are largely communities of color, communities that are under-resourced in many ways, although arguably over-policed in others.

So ironically, the people that I think matter most in terms of building trust from the ground up are those that are least likely to be represented through traditional methodologies and in particular those that really get at microgeographies and even microdemographics. So, a lot of people talk about the community and are imprecise in what they mean by that. A community could be an entire jurisdiction. It could be a police reporting area or a beat, it could be a census tract, it could be a ZIP code, and all of those areas are really large based on what we know about how crime and all manner of phenomenon cluster and they tend to cluster right down to the street segment level. And that's where you'll see variations and experiences that are often just a wash when you aggregate up. Microgeography is a big part of this and we recognize that issues around identifiability are also a challenge. So this truly is a challenge. It's a challenging challenge and we're so excited to invite the best minds in the business to submit your greatest ideas, and we're eager to review them. We recognize you may have some questions, so we hope we have time to clarify what we're looking for as well as to answer any questions you may have. So, Liz, I hope I didn't steal too much thunder in that introduction since I didn't tell you what I was going to say, but I'll turn it back to you and I'm going to remain on in listening mode and help out with any Q&A portion of this that I can. Thank you.

ELIZABETH GROFF: Not at all. That was a wonderful introduction. Thank you. I want to emphasize that this webinar is providing an overview of the challenge website itself. All the details are on the website, and I urge you to look at it. Let's begin with some information about our motivations for offering this prize challenge. It starts, as the Director said, with the core belief that consistent, rigorous measurement of community perceptions can provide critical feedback to police, city managers, and community members. Such feedback can support the development of new policies and programs, as well as the evaluation of current practices. Although probability surveys have long been the gold standard for measuring community perception, they've also been expensive and time-consuming, which meant that they were not deployed frequently, if at all. In addition, technological and cultural shifts are leading to lower response rates, making probability surveys more vulnerable to nonresponse error, more time-consuming, and more expensive. This has led to greater use of nonprobability surveys, but they also have significant drawbacks for measuring community perceptions. At the same time, new methods have emerged that leverage the increased accessibility of big datasets to gauge community perceptions. Our goal with the challenge is to identify innovative survey and big data methods for measuring community perceptions. Next, let's discuss the core characteristics we're looking for in proposed methods.

Methods proposed under this challenge should have the following characteristics. They should accurately represent the characteristics of the community on the key dimensions of race, ethnicity, age, and gender. They should be cost-effective to allow for frequent measurement. They should produce estimates for small geographies to reveal differences in the patterns of perceptions and changes in those patterns. They should be capable of low-burden administration so that they can be conducted on a systematic basis, and they should be scalable for use in jurisdictions of various sizes.

At this point, let's turn to the details of the challenge itself. The structure of the challenge reflects our desire to solicit a wide variety of innovative solutions. To facilitate comparing entries using similar methods, entries will be accepted under two major categories. Each category measures the same five constructs. Those two categories are Category 1 for surveys and Category 2 for data. Within the survey category (Category 1), all entries should use a survey instrument. There'll be two subcategories, one for probability surveys and one for nonprobability surveys.

Within Category 2 for data, the emphasis is on gathering information without using direct contact with the members of the community. Entries might use social media, administrative data, and other proxy data. They could use a variety of different tools and techniques such as natural language processing, for example, using large language models. But we encourage you to bring us your best ideas, no matter the tools or the technologies. Each entry must identify the construct being measured and the data being used to represent it. Within Category 2 for data, there's an overall competition in which entries must address each of the five constructs. So you'd be submitting a set of data and method(s) so that you can measure each of those five constructs.
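To make the shape of a Category 2 approach concrete, here is a deliberately naive, illustrative sketch: a keyword-based proxy for perceptions of police, scored per small geography from hypothetical social media posts. The keyword lists, area IDs, and sample posts are all invented for illustration; an actual entry would use far more robust NLP, such as large language models, but the basic pipeline of text in, per-area perception score out, is the same.

```python
# Illustrative sketch only: a naive keyword-based proxy for sentiment toward
# police in a set of hypothetical social media posts, aggregated by area.
# Keyword lists and sample data are invented; a real method would use
# modern NLP models rather than keyword matching.
from collections import defaultdict

POSITIVE = {"helpful", "respectful", "fair", "protected", "trust"}
NEGATIVE = {"harassed", "profiled", "ignored", "afraid", "distrust"}

def perception_scores(posts):
    """posts: iterable of (area_id, text) pairs. Returns {area_id: mean score},
    where each post scores +1 per positive keyword and -1 per negative one."""
    totals, counts = defaultdict(float), defaultdict(int)
    for area, text in posts:
        words = set(text.lower().split())
        totals[area] += len(words & POSITIVE) - len(words & NEGATIVE)
        counts[area] += 1
    return {area: totals[area] / counts[area] for area in totals}

sample = [
    ("tract-01", "Officers were respectful and helpful at the event"),
    ("tract-01", "Felt profiled and harassed during the stop"),
    ("tract-02", "I trust the new community liaison"),
]
print(perception_scores(sample))  # → {'tract-01': 0.0, 'tract-02': 1.0}
```

The per-area aggregation is the part that matters for this challenge: scoring at the tract or street-segment level is what lets a method reveal the microgeographic variation the Director described.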

Within Category 2, there is also an individual construct competition, in which each entry would address one and only one construct. Say, for instance, you had a good measure of perceptions of community policing; if that's the only one you have, then you would enter under the community policing construct. You can compete in both the overall and individual subcategories, but each entry needs to be unique. You can also compete in more than one of the individual constructs, but you can only have one entry per construct.

At this point, let's turn to the prizes and how they link to that organization. We have a total of $175,000 in prize money. That breaks down to $87,500 each for Category 1 and Category 2. They're weighted equally in the competition. Three of the four subcategories have the same prize structure, which has a first prize of $25,000 all the way down to a fifth prize of $2,500. The only subcategory with a different structure is the individual constructs. Those prizes range from $5,000 for first prize to $500 for fifth prize.

Let's now spend a few minutes discussing what you need to have in your entry and the judging. Generally, all challenge entries must demonstrate appropriate knowledge of applicable data sets and methods. They need to provide a detailed overview of the proposed method and how it satisfies each of the required criteria. Any evidence that the proposed method has been used successfully in analogous scenarios will strengthen your proposal. So what does the judging look like for Category 1 and Category 2 entries?

For Category 1 entries, again these will be surveys, the judging will be across six different criteria. Related to representativeness, entries should include specific process and outcome measures, which can be used to accurately measure representativeness. They should provide a clear and specific description of all costs related to the deployment method, including a total cost per data collection. Entries should clearly identify the smallest geography and how accuracy will be established at that geography. Related to frequent administration, entries should identify the length of time and the resources necessary to deploy the survey. Entries should describe how the survey can be deployed in jurisdictions of varying populations and sizes. And finally, each entry should describe how the method will measure perceptions while protecting privacy.

Turning now to Category 2. These will use existing data. The judging will be on the same six criteria. But it will be slightly different because we're dealing with proxy data rather than survey instruments. Related to representativeness, entries should describe how well the proxy measures represent the constructs across the entire community. They should provide a clear and specific description of all costs related to deploying the method, including an estimated cost per capita for a community of 100,000 people. We ask for that information to try to standardize the cost across different entries. Entries should clearly identify the smallest geography and how accuracy will be established at that geography. Related to sustainability, entries should identify the method's ability to support continuous or frequent data collection. Entries should describe how the method can be deployed in jurisdictions of various populations and demographic structures. And finally, entries should describe how perceptions will be measured while protecting privacy and mitigating surveillance concerns.
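Since the Category 2 judging asks for an estimated cost per capita for a community of 100,000 people, the standardization is just total cost divided by that reference population. A minimal sketch of the arithmetic, with a hypothetical dollar figure:

```python
# Minimal sketch of the cost standardization the Category 2 judging asks for:
# total cost of one data collection cycle, expressed per capita for a
# reference community of 100,000 people. The $45,000 figure is hypothetical.
def cost_per_capita(total_cost, population=100_000):
    return total_cost / population

print(f"${cost_per_capita(45_000):.2f} per capita")  # → $0.45 per capita
```

Expressing cost this way lets judges compare entries whose raw costs scale very differently, for example, a fixed-cost data pipeline versus a per-respondent survey.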

There's a particular format for submission of entries. That format is as follows. First, you want to make your submission through the webpage URL that's on this slide (https://nij.ojp.gov/funding/innovations-measuring-community-perceptions-challenge-submission-form). I also want to note that the challenge closes on July 31, 2023 at 11:59 PM Eastern Time. Can I just give you a word of advice? Don't wait until the last minute. We have an email address on the challenge website if you have technical difficulties, but you don't want to leave that to the last minute.

Each entry should have three components. There should be a team roster, a narrative, and an appendix. Let me talk about each one of those in turn. The team roster should be included for everyone, even if you're an individual entrant. If you have more than one team member, it's important that you include each person's name and the proportion of the prize that they would receive if you won, and then have them sign next to that information. You'll want to include a narrative. That narrative is limited to 12 pages, in 12-point font and double spaced. Please make sure that you're within those 12 pages. And then there's an appendix. If you cite references, put your bibliography in the appendix. We also want to see your detailed budget for what your method will cost. Remember, the appendix does not count against the 12 pages, but it also can't contain anything but the bibliography and the budget.

Let's move now to eligibility. I want to start by saying that NIJ truly is casting a wide net with this prize challenge. Entries from disciplines outside of criminal justice or involving interdisciplinary teams are strongly encouraged. NIJ welcomes entries from practitioners, from researchers, from public and private entities, research laboratories, startup companies, you name it. Now, the excluding factor is that individuals, teams of individuals, corporations or other legal entities must either reside or be domiciled in the United States — the 50 United States, the District of Columbia, Puerto Rico, the U.S. Virgin Islands, American Samoa, and Guam. Interestingly, if you're an individual, you have to be at least 13 years of age to apply. Okay. So, we made it through the description of the challenge. Now comes the part you've been waiting for: questions and answers. Please put your questions into the Q&A box. We'll have that monitored, and then someone will read them out to me.

JOEL HUNT: The first one, Liz, you've already started to answer. “Are you interested in studies that leverage social media data such as Twitter data?”

ELIZABETH GROFF: Yes, we are. That's one of the many things that we're interested in. And Nancy jump in. I see you nodding vigorously.

NANCY LA VIGNE: That's me agreeing.

JOEL HUNT: “What do you consider cost-effective? For weekly insights, what is a rough cost range that would be considered cost-effective?”

JOEL HUNT: Not to speak out of turn, but I think it really also depends on other aspects of your proposal such as what geography level, what frequency, things like that. So I think it's sort of difficult to provide just a blanket answer to this.

NANCY LA VIGNE: Yeah. I agree. And I would add, we are asking for a per capita cost, the denominator being the number of people who are represented in whatever the measurement is. To some degree, it's going to be relative based on the nature of the submissions we receive. But what we overall are seeking is as close to an inexpensive means as possible to collect this data so that it can be collected repeatedly, like every quarter, or every six months at least. So that we can discern changes over time as a result of changes in, say, police community engagement strategies or changes in policies, or even changes in events that hit the national news that have nothing to do with the jurisdiction but still influence local perceptions of police rightly or wrongly. So it's really more of a relative thing that we can't really answer because we don't know what we'll get. But we do want something that can be affordable for jurisdictions to do.

ELIZABETH GROFF: I can't add anything more. You two did a wonderful job.

JOEL HUNT: So sticking on the topic of cost, “Should the cost mentioned be based on what I would charge for such services which are described in my entry? If not, what sources should the cost be based on?”

ELIZABETH GROFF: The cost really is about how much it would cost a municipality or a jurisdiction to undertake this type of survey, or this type of data method. Our goal with this isn't to have a new data collection by the federal government. Our goal, and please correct me if I'm wrong, Director, is to empower jurisdictions to fund their own collection of community perceptions.

NANCY LA VIGNE: Right.  I agree. I guess it depends on who you are and what you bring to the work, right? So if the question comes from a would-be vendor, then it would be whatever costs you would typically charge. If it's a researcher that wants to generate data for research purposes, maybe you wouldn't charge anything from a profit perspective, but there's still cost involved in actually doing the work.

JOEL HUNT: “Does the corporation need everyone in the corporation listed on the team roster?” And again, not to speak out of turn, I think the short answer is, it needs to list everyone who is part of the team doing the submission, who would be eligible to receive a portion of any of the prize money won. So if you have a thousand-person company, but only five people are participating in the submission, you only need those five individuals' names, the percentage of the prize money each would receive, and their signatures.


JOEL HUNT: “If your proposal relies upon partnerships, do we need to have all partnerships in place when we apply? We need to leverage partners for survey dissemination.”

NANCY LA VIGNE: That question makes me concerned that there's a misunderstanding about the nature of the ask. We are not seeking proposals to do the work. We're actually inviting people to share with us how you would do the work, and your assessment of how representative and accurate it would be, as well as the cost. To the extent that you can draw on previous work to justify your approach, you will strengthen the entry. You would not have to have your team in place in order to submit the work. Feel free to pose a follow-up question on that, just to make things even more clear.

ELIZABETH GROFF: Thank you for saying that. I wanted to say that earlier. Follow-up questions are welcome.

JOEL HUNT: “Could you please clarify if students at a US university on F-1 visas are eligible.” I'm not an expert in federal definitions. And again, not to speak out of turn, but usually the easiest language I've been able to use is, if you pay taxes on any money you would receive, you are eligible. It's usually the easiest definition we've come up with. I'm not an expert on every single type of visa, but if your visa requires you to pay taxes in the US, that is generally how we make that decision.

ELIZABETH GROFF: There's also a footnote in there that takes you to actual categories of people, and that may be helpful.

JOEL HUNT: “Can a single team do entries for Category 1 survey instrument and Category 2 non-survey?”


JOEL HUNT: Yes. In theory, a team can be part of eight different submissions, correct? Probability survey, nonprobability survey, overall Category 2, and then each of the five individual constructs in Category 2?


JOEL HUNT: So up to eight entries potentially per team.

ELIZABETH GROFF: That's right.

JOEL HUNT: So we did get a follow-up question to your response, Nancy. "Do you want us to conduct a study and write an executive review summary before July 31st?" is now the question.

ELIZABETH GROFF: Well, I guess there are two different ways you could come at this. You could have already done the work somewhere else and be proposing that idea to NIJ. In that case, you should have some estimates of what it would cost per capita in some fictional jurisdiction that has 100,000 people, right? Or you could have some great idea about an approach and estimate the cost. This competition truly is a ‘great idea’ competition. You have to demonstrate that you've thought through the specifics of your great idea; otherwise, it won't be a convincing entry. But you don't have to actually have done the work in the past.

JOEL HUNT: “Are cross-cultural study sites accepted in the proposal? For example, can we document a study in multiple countries, including the United States?”

ELIZABETH GROFF: This is about how you would conduct the study, not about where. The idea is: what's your best idea for how to conduct either a survey or some data analysis, along with a plausible way of saying how representative that method is? It is not about variety in the types of communities per se.

JOEL HUNT: I often think of this challenge as if we're thinking of a generic jurisdiction. And we want people to base this on prior knowledge, prior research that was potentially done, but how would they now do proper perception work within this new generic jurisdiction? How would we properly measure those five constructs that are provided? Is that fair?


JOEL HUNT: “What I understand is our design can include elements of both Category 1 and 2. A survey element and usage of available data. If yes, should we select one category and apply to that category? Could you clarify?”

NANCY LA VIGNE: Well, we were just talking about this the other day, because it was a late-in-the-game revelation to ourselves that someone might want to do a hybrid approach. And so we determined that you should choose what category you would like to be considered under if you choose a hybrid approach.

ELIZABETH GROFF: So one way you might do that is if the majority of your approach is about data, then put it in the data category.

NANCY LA VIGNE: Right. If it's mostly survey with a supplement, then you put it under survey, or maybe you want to be strategic and you think that there's more competition under one than the other. I can't tell you which it would be. But that's up to you to decide.

JOEL HUNT: Someone posted more of a statement, looking for confirmation of its accuracy. “It seems the prize money goes to the team presenting the idea, but there are no funds for executing the proposed project.”

ELIZABETH GROFF: If you look at the website, there's a statement there that, pending additional funding, we would love to invite prize winners to pilot their prize-winning submissions.

NANCY LA VIGNE: We would be interested in taking some share of the prize winners and supporting them through a piloting phase that would be funded separately.

JOEL HUNT: That actually goes towards the next one of, “Will there be additional funds to carry out the study?” And I think you both just answered that. The next one is, they're quoting from the challenge that says, “’Solutions that solely rely upon law enforcement contact surveys will not be considered.’ Does this eliminate using law enforcement data such as cell phone numbers via incident reports as a potential source for a phone contact?”

NANCY LA VIGNE: So we said "solely," which is to say we wanted to make sure that we weren't getting measures that only look at people who had their own experiences with police, because we know that that's probably the easiest way to represent community perceptions. But we also are aware from the literature that people develop perceptions both from their own experiences as well as what they hear from their family and friends and what they witnessed in the course of their day-to-day activities. So that's why we have the word "solely" in there. That does not eliminate law enforcement sources of data or respondents, but we don't want it to be solely about people who have had interactions with law enforcement.

NANCY LA VIGNE: I saw something around decolonization. Where was that? “Traditionally, scholars and practitioners of color aren't prioritized through funding in a way that decolonizes perceptions of law enforcement.” That's one of the reasons we're doing this challenge: we don't feel that a lot of people, particularly people of color, are represented through traditional survey methodologies, particularly those that are led by law enforcement. These tend to be community satisfaction surveys conducted through convenience samples, where people go online and fill out a survey or send back a paper-and-pencil survey through the mail. We know those tend to over-represent White people, people who live in safer communities, and people who have more favorable experiences with and perceptions of police. So that's one of the things we're trying to get at, to uplift other perspectives. The degree to which your team includes scholars and practitioners of color is of course up to you. But I think a case could be made that that would be more likely to generate higher response rates from the populations of interest.

JOEL HUNT: “If a company is based in a foreign country but has an office in the US, are they eligible and do all team members in that company need to live in the US?”

JOEL HUNT: Yes. If there is a US office, they are allowed to compete through the US office. What individuals are allowed on that team, again goes back to a similar answer as before. If you pay taxes on any prize money won, you're allowed to be on that team roster. So that's really what the catch is, it tends to be those who are--that have a US-based facility and those who are in that US facility because the individuals who might be at maybe a main office in a foreign country would not be paying taxes. It again comes down to would you pay taxes on any prize money.

ELIZABETH GROFF: If a US-based company submits an entry as itself, but the entry was actually created by some employees who are not based in a US territory, then the onus would be on the company to pay the taxes and distribute the prize money. The employees would not appear on the entry at all.

JOEL HUNT: Yeah. We have had that requested before where the payment is made to the company. The company is then the only thing listed on the roster and all the money goes to that company and that company would then pay taxes on it. And then there would have to be an agreement between the company and its participants on how they would receive any money from them. But a company is allowed to participate in this as long as they're in the US and pay taxes. How they then would get the money to anyone who is part of that submission is between that company and their individual workers.

ELIZABETH GROFF: Great, thank you.

JOEL HUNT: “Do we need to specify what will be done with the money if we win a prize?”

ELIZABETH GROFF: No. It's your money.

JOEL HUNT: “How is ‘accurate’ statistically defined for the priority of accuracy at microgeographies?”

NANCY LA VIGNE: Representative based on the underlying demographics of that microgeography.


JOEL HUNT: Okay. “Is there a specific cost range that you recommend we try to stay within?”

ELIZABETH GROFF: The easy answer is no, there's not. I mean we're going out for a challenge because we're looking for cost-effective solutions. So that's the best kind of guidance I can give you. If it's a really big, big number, then it's probably not going to be cost-effective.

JOEL HUNT: “How is the prize money separated from the funds needed to execute the study?” I think this goes back again to a confusion that we are not expecting individuals to actually execute what they proposed to us. They are providing us or proposing to us almost a proof of concept of what their methods would be, correct?


NANCY LA VIGNE: Yes. But the more the proof of concept information that is included in the entry, the better. You will be more competitive if you can demonstrate from prior research or preliminary analyses that the strategy is, for example, representative at a microgeography. So that could be drawing from past research, drawing from other people's research, drawing from what you can demonstrate you are able to discern through some kind of mining of big data, et cetera.

ELIZABETH GROFF: That's right. We want everyone to understand that this is all about winning the prize for having the best idea. After the dust settles, we'll look at what we can do in the future.

JOEL HUNT: “Is a challenge focused on a community perception of the police around a particular type of crime acceptable?”

ELIZABETH GROFF: In order to have our entries be as comparable as possible to one another, we are asking everyone to use the same five constructs. There are five survey questions. If you're thinking about proposing a survey, we'd like you to use those questions. Please follow up, as Nancy said, with another question if that doesn't answer your question.

JOEL HUNT: “Survey proposal follow-up. If we have already conducted similar work, do you want that data submitted or just proposed and described in the narratives?”

ELIZABETH GROFF: Have you published that? If it's something you could cite, then you should cite it and use it as the basis for your arguments in this entry.

JOEL HUNT: “The five constructs mention that we must use the NIJ survey questions shown on the website. However, my methods move away from asking subjective questions on surveys such as the ones you have listed. Is this really a mandatory requirement? I will still have survey questions, but they would be much less subjective.”

ELIZABETH GROFF: Providing the questions was done to focus entrants’ attention on the methods rather than on the question construction and design of the survey instrument. But if you really cannot propose your method without different questions, then you have to decide what to do.

NANCY LA VIGNE: Certainly, the questions we provided represent specific constructs; if you use different questions, you should still measure those same constructs to be at all competitive.

JOEL HUNT: “Is it correct that proof of concept or prototype implementation is expected and not just a description of the proposed methodology and expected outcomes?”

ELIZABETH GROFF: No. We are simply saying that it strengthens your proposal to have evidence supporting the approach you are proposing. If you do not have evidence, then provide coherent, plausible, theoretically based arguments in support of the method you are proposing.

NANCY LA VIGNE: But you will be more competitive if you have some type of prototype implementation. Any way that you can demonstrate that your strategy is a good one will be helpful.

JOEL HUNT: "The cost issue may be confusing people. For example, cost for us is the same whether it is US national, state, city, group in a city, et cetera. The ‘per capita’ is not germane for some solutions."

ELIZABETH GROFF: If the cost is the same regardless of geographic level, then simply provide that single cost per time period.

JOEL HUNT: “Is the budget expected to be in a specific format?”


NANCY LA VIGNE: You're not submitting a budget as if it were an application, right? You're just saying this is how much it will cost based on our experience of how much we think it will cost. It's not a proposal budget.


JOEL HUNT: “So, to further drill into that, are there expectations that they break down the costs in terms of how much money they would have to spend on technology or how much they would have to spend on maybe simple supplies or personnel, or things like that? Do you want it broken down into different cost categories?”

ELIZABETH GROFF: Yes, in whatever categories fit your proposed idea, right? That's why we didn't want to specify any kind of format, because every entry might be different.

Specify the costs that would be incurred to actually carry out the survey administration or develop measures from existing data.

NANCY LA VIGNE: Can I just add something? Because I'm putting on the prospective grantee hat that I wore for many years before coming back to NIJ. We obviously want your best estimate, but this should not signal anything about any future opportunity to apply for a pilot, right? What I don't want is for people to worry that their cost is too high and strategically underestimate it, or anything like that. But you also will not be completely locked into whatever that cost is if you end up being a prize winner and you're invited to submit a proposal. That is a kind of reset, where there would be more opportunity to think through the methods. Does that sound okay with you, Liz and Joel?

ELIZABETH GROFF: Exactly. Any post-Challenge solicitation would be competitive. And then at that point, you'd be talking about your own budget, but not here.

JOEL HUNT: We're going back to the questions again. “The five questions listed on the challenge website, are those the five questions to be used in the survey? Or are we to extrapolate a separate set of questions predicated on the list? That is, are these constructs listed on the website, or are they the specific questions that would be used on the survey?”

ELIZABETH GROFF: Yes, those are the five questions we’d like each entry to use on their survey. We would like you to just use those questions because we don't want this to turn into a competition about who can produce the best questions to measure each construct. This is about how to actually undertake a survey.

JOEL HUNT: I think that somewhat answers the next one of these questions, “Have the questions we are to use been tested for validity or reliability?” And again, the specific questions are not the focus of this challenge. The next one, “Is there a page limit?” The main entry is 12 pages, but the appendix does not have a page limit. However, Liz did go over what is allowed and not allowed to be inside of it. “Given that so much of what we know today about communities comes from over a century of ethnographic studies, is there any place in this competition for ethnographic or any qualitative research?”

ELIZABETH GROFF: I think that's up to you. We want innovative ideas. If you can produce an innovative idea that integrates ethnographic research and yields a community-wide survey, then bring it on. I would love to see it. I mean that sincerely, even though I know it doesn't always come through in my voice. This is my sincere face, though.

NANCY LA VIGNE: I can vouch for that.

JOEL HUNT: “What can municipalities normally budget for this type of information?” I think they're looking for some goalposts on overall budget, but I don't think we have any data to provide them.

ELIZABETH GROFF: I’d suggest that entrants consider NIJ's overall goals when thinking about the idea they're proposing and its cost. The goals are weighted against one another. We're trying not to limit the entries we get. I realize that you're trying to win, and I respect that, but we want your great ideas. Be assured that we understand there are trade-offs: maximizing one of the criteria may increase the cost.

NANCY LA VIGNE: Yeah. Although, I will say I don't think looking at jurisdictional cost is a good measure because, in my assessment, jurisdictions have under-invested in getting at this type of information at the level we think we need to and at the degree of representativeness.

JOEL HUNT: Here's a technical question. “Does the prize money go to the individuals on the team? Can it go to the unit of the organization that the team members belong to?” You have the opportunity to be a team of individuals. You could be a team of individuals from an organization working on organization time, if the organization has approved it, and the organization might want to receive that money and distribute it to the employees. We're not going to dictate those arrangements. If you're part of an organization, you need to talk to your organization about how best to apply and whether you should be doing this on company time or on your own time. The money can go to individuals, it can go to a company, or it can go to a company and then to individuals. As long as we have the exact proportion of prize money to send each member of the team, we can send the prize money to the winners' bank accounts, and they will pay the taxes on it. Beyond that, we don't say much about what's done with the money.

ELIZABETH GROFF: Excellent. Thank you.

JOEL HUNT: “To further clarify, individuals do not need to be affiliated with an organization then?” That is correct. “So we are not required to submit a post-award report at the moment, it's just the actual submission, correct?”


JOEL HUNT: “Could you include survey questions beyond the mandatory ones?”

ELIZABETH GROFF: I think that you can, but you're not going to be graded on those. I'm sorry. I was a college professor for 15 years. You're not going to be graded on the additional questions, but you will be graded on your methods. Does that help?

NANCY LA VIGNE: And if you include any additional questions, that adds to your cost. You need to consider that, and you might want to separate it out from the cost estimates associated with the requested five questions. That's what we're asking.

JOEL HUNT: “Do we get the feedback and scores from the reviewers?” In past challenges, we used an equation to score entries right there. It was not subjective; it was purely a score based on an equation, and teams were able to get those scores. This challenge is much more like how we handle solicitations.

NANCY LA VIGNE: Okay. Without locking us into anything, I just want to assert, again putting on my applicant hat from all those years, that I firmly believe in giving as much constructive feedback as we can, recognizing that we have no idea how many submissions we will receive. Therefore, we can't say how detailed the comments we share will be.

JOEL HUNT: We're getting more detailed and specific questions, so I think we're getting better at describing this. “Based on the responses provided to most questions, it seems like we do not have to conduct a study but propose an idea to best measure the constructs within the challenge. As such, can we propose partnering with community agencies?”

NANCY LA VIGNE: Yeah, I saw that and I realized we're giving a little bit of mixed signals here. Yes, you can propose any number and type of partners you wish. You don't need to have letters of support. This is conceptual. But again, any way that you can make the case for why you'd be partnering with certain people, what perspectives they represent, what ability they have to represent respondents or reach respondents, you know, that's all going to be an important part of how you describe your proposed approach.

JOEL HUNT: Since we're almost out of time, I'm going to do the next two very quickly. “Is what is or is not allowed in the appendix listed anywhere?” Yes, if you go to the challenge website, you will see a description of what's allowed in the appendix.

“Is there any time limit by which we need to execute and finish the proposed study idea?” Again, you're not executing what you're proposing; you are purely proposing this work.

ELIZABETH GROFF: July 31st, 11:59 PM is the due date for challenge entries.

JOEL HUNT: “If winners will be invited to execute their projects, when will they know about the psychometric properties of the survey items? The project could have quite a bit of reach.” Again, I think that is much, much further down the road. It depends on the availability of funds and so many other open questions that I don't think we can provide any timelines, or say for sure whether it will happen.

ELIZABETH GROFF: Right. You've got plenty of time to work through everything.

JOEL HUNT: The last question is, “Sample survey questions can be included in the appendices, correct?”

ELIZABETH GROFF: No, don't do that. The appendix is limited to the bibliography and the budget. That's it. If you put anything else in there, we must disqualify you.

NANCY LA VIGNE: Right. And just to be clear, we're really not seeking sample survey questions. This is not a request for people to develop a survey instrument.

JOEL HUNT: “There's no requirement of CVs or bios on the team roster, correct? Is that correct or is there still the requirement of a bio part of the team roster?”

ELIZABETH GROFF: No. We took that out. It's just the names of the people on the team.

NANCY LA VIGNE: We didn't want to bias it in any way because we want lots of fresh ideas from a lot of different sources and different students and senior faculty and everything in between.

ELIZABETH GROFF: That's right. Thank you all for your engaging questions.

NANCY LA VIGNE: Yeah. Excellent questions.

ELIZABETH GROFF: Thank you again. Thank you, panelists. Thank you, everyone who attended. We will answer any questions that we didn't get to on our FAQs page. Take care.

NANCY LA VIGNE: Thank you.

STACY LEE: This will end today’s presentation.

Date Published: June 23, 2023