Post by NCFC on Dec 12, 2012 16:56:49 GMT -7
[Question posed by Harvey Licht]
States have identified a number of distinct areas, communities, and ZIP codes, currently identified as frontier under other designations, which do not appear in the dataset resulting from the proposed methodology. Some of these communities are defined as county subdivisions, some as Census places, and some as ZCTAs in the year 2000 Census. Some of these areas and communities appear to have been combined into larger ZCTAs that are not characterized as frontier under this methodology.
Question: What do you think could be done to increase the granularity of the proposed methodology to reduce the number of overlooked frontier communities?
Post by NCFC on Dec 12, 2012 17:51:57 GMT -7
[Answered by John Cromartie]
We've put out the ZIP code version of this, but given that the underlying data are based on these 1x1 kilometer grids, it’s possible to aggregate to other geographic units, and that may prove very useful to, for instance, calculate the frontier population of census designated places. So if that’s important to do, that’s easily done. We can calculate frontier status for census tracts, for places, we can even go as high as counties, and that may be a useful thing to do to compare with previous definitions and to clearly show the level of fuzziness that you get when you try to designate a frontier status for a county. Counties are typically very large and include frontier and non-frontier population. But the community level would be very useful, I think.
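The aggregation Cromartie describes, rolling 1x1 km grid cells up to larger geographic units and then assigning each unit a frontier status, can be sketched in code. The following is a minimal, hypothetical illustration only: the cell data, unit identifiers, and the majority-population threshold are assumptions for demonstration and are not drawn from the actual methodology.

```python
# Hypothetical sketch of grid-to-unit aggregation: each 1x1 km cell
# carries a population count and a frontier flag; a containing unit
# (tract, place, county) is classified by the share of its population
# living in frontier cells. The 50% threshold is an assumed parameter.
from collections import defaultdict

# Each cell: (unit_id, population, is_frontier) -- illustrative values.
cells = [
    ("county_A", 120, True),
    ("county_A", 300, False),
    ("county_B", 40, True),
    ("county_B", 10, True),
]

def frontier_share(cells):
    """Return each unit's share of population in frontier cells."""
    total = defaultdict(int)
    frontier = defaultdict(int)
    for unit, pop, is_frontier in cells:
        total[unit] += pop
        if is_frontier:
            frontier[unit] += pop
    return {u: frontier[u] / total[u] for u in total}

def classify(cells, threshold=0.5):
    """Label a unit frontier if its frontier-population share meets the threshold."""
    return {u: share >= threshold
            for u, share in frontier_share(cells).items()}
```

This also illustrates the "fuzziness" point: county_A here contains both frontier and non-frontier population, so its classification flips depending on the threshold chosen, while a smaller, more homogeneous unit like county_B is unambiguous.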
Post by NCFC on Dec 12, 2012 17:52:46 GMT -7
[Gary Hart]
I don’t disagree with that, but I just want to add that I’m thinking of this as a programmatic thing. So if I’m running a program, and I want to see if people can qualify for it, I think you can all see what the problem would be if we said, “Anybody can qualify for this program if they can find that they’re frontier on any geographic sub-unit.” I mean, you’d just keep fishing until you found, say, North Dakota legislative districts and found that you could actually qualify using one of those. If you see what I’m saying, depending on whether [your units of aggregation] are super small or super large, you’ll find different things. Any given program probably needs to fix itself to some kind of unit. This methodology lends itself to being aggregated to different units, but letting applicants choose any unit and compete with each other based on their ingenuity in picking units is, I think, a problem.
Post by NCFC on Dec 12, 2012 17:53:33 GMT -7
[Steve Hirsch, ORHP]
Different methodologies are going to produce different results; that shouldn’t be surprising, I think. We’re not going to exactly match up with other methodologies using this one.