Hi all,
I have checked the API reference and the Concepts reference for the Community Analyst API, but I can't see anything that describes the geoprocessing routine that generates the reports. In order to use the API, I need to know what assumptions are being made. For example, in this sample:
http://help.arcgis.com/en/communityanalyst/apis/flex/samples/index.html?sample=SummaryReports
are the values merely a summary of the census geographies that fall within the selected polygon, or is a clip taking place? If the latter, is there an assumption that population is uniformly distributed across each census polygon, or is population apportioned based on other variables, e.g., land use?
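To make the two alternatives concrete, here is a minimal Python sketch of the second one: "clip" apportionment under a uniform-density assumption. This is purely illustrative and not Esri's actual implementation; for simplicity the census blocks are invented axis-aligned rectangles rather than real polygon geometries.

```python
# Hypothetical sketch of clip-based apportionment assuming uniform
# population density. Each block is an axis-aligned rectangle
# (xmin, ymin, xmax, ymax); real census geometries are arbitrary polygons.

def rect_area(r):
    """Area of an axis-aligned rectangle (xmin, ymin, xmax, ymax)."""
    return max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])

def rect_overlap(a, b):
    """Area of the intersection of two axis-aligned rectangles."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0.0, w) * max(0.0, h)

def apportion_uniform(blocks, study_area):
    """Weight each block's population by the fraction of its area
    that falls inside the study area (the uniform-density assumption)."""
    total = 0.0
    for geom, population in blocks:
        area = rect_area(geom)
        if area > 0:
            total += population * rect_overlap(geom, study_area) / area
    return total
```

For example, a block of population 100 half-covered by the study area would contribute 50 under this scheme, whereas a pure summary of intersecting geographies would contribute the full 100.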
I suspect I'm just missing a key reference that documents how BA works. Can someone please point me in the right direction?
TIA,
Julie
Tony,
Thanks for the response! It sounds like you have a defensible approach, but I would still like to know a bit more. Every analytical result has caveats and pitfalls, and it's difficult to know what decisions can be made based on data that isn't qualified by a clear understanding of the algorithm that produced it. I know that dasymetric mapping comes in many flavors, especially depending on the inputs you select for your interpolation. Do you have a whitepaper that summarizes the techniques you're using? Or, at least, some peer-reviewed articles referencing the key components of your algorithm(s)?
Thanks again!
Julie
Julie,
You may find the link below to the Business Analyst 10.1 Desktop online Help useful, as it details how the software summarizes data. There are several methods that let you decide how data is summarized spatially. I believe BAO uses a method similar to the Hybrid method described there, but I find the section on block apportionment vs. cascading centroids the most informative.
http://resources.arcgis.com/en/help/main/10.1/#/Data_tab/000z000000v7000000/
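For illustration, the block-apportionment idea that Help topic describes can be sketched roughly as follows: each block's entire population is assigned to the study area if and only if the block's centroid falls inside it. This is a minimal Python sketch with invented centroids and populations, not Esri's actual implementation.

```python
# Hypothetical sketch of block-centroid apportionment: a block contributes
# all of its population when its centroid lies inside the study-area
# polygon, and nothing otherwise. All data here is made up.

def point_in_polygon(pt, poly):
    """Ray-casting point-in-polygon test; poly is a list of (x, y) vertices."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray from pt
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def apportion_by_centroid(blocks, polygon):
    """blocks: list of ((cx, cy), population) pairs with block centroids."""
    return sum(pop for centroid, pop in blocks
               if point_in_polygon(centroid, polygon))
```

Contrast this with area-weighted clipping: centroid assignment is all-or-nothing per block, which is faster but lumpier at the study-area boundary.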
Regards,
Jason R.
I am writing a fact sheet to make folks at my university aware of the availability of Community Analyst. I would like to be able to give them a definitive answer as to how data is apportioned for non-standard areas (polygons). The answers in this thread point to the methods outlined for Business Analyst Desktop (block apportionment, cascading centroid, and hybrid) in the ArcGIS Desktop 10.x Help. However, I do not see a definitive answer in this thread as to whether these are exactly the same methods used in Community Analyst. Can anyone suggest where I could get that answer? Thanks
Thanks much Jason. Very helpful.
I have another question. The following Esri web page seems to indicate that Community Analyst Standard Plus currently provides access to 12,764 variables: http://www.esri.com/software/arcgis/community-analyst/variables.
Is that right?
Also, when I sum the quantities that appear alongside the names of each dataset (or data category) on the left of the page, the total is 30,595. If these quantities indicate the number of variables in each dataset (or data category), am I right in assuming that some variables are being "double counted", i.e., that whoever created the list assigned particular variables to multiple data categories?
Thanks