POST
I never got around to solving this problem. I forgot that I had posted here; I did a search and found my own question, already posted by me. And I can no longer see Bill's response, which impressed me at the time. My recent thinking has identified a problem with my first approach, then a second approach, also with a significant flaw, then a third approach, which is an elaboration of the first but which I don't know how to execute. Bear with me.

Let's keep it simple. Consider a scenario of two hospitals with non-overlapping service areas, where we want to discover the de facto catchment area containing 90% of patient cases for each hospital. (Let's also not worry that some persons have multiple encounters.)

Approach #1: Create a density raster, draw isolines, and choose the one that encloses 90% of patients. There's your catchment area. Problem: The isoline creation process is not guided by the percentage or number of total cases. It is guided by a Z value assigned to each raster cell that represents the density of people in the cell's immediate surroundings. So we can estimate the isoline for 20 patients per square mile, but that does not tell us how many patients fall within that isoline.

Approach #2: Create drive-time service areas at closely spaced time intervals around each clinic location using Network Analyst, for example one service-area polygon for every additional 2 minutes. Next, perform some kind of spatial join to capture the number of patient points associated with each polygon. The polygon covering 90% of patients is your catchment area. Problem: The service-area polygon represents potential service, not actual patient origination points. A hospital located in the center of town might draw most of its patients from communities lying to its north, yet the service-area polygons are anchored around the hospital location. Thus getting to 90% might require a very large drive-time service area that reaches far north while also covering large irrelevant areas to the south, east, and west. This method is objective but not necessarily reflective of the actual catchment area.

Approach #3: Go back to Approach #1. Draw closely spaced isolines and export them to a polygon feature class. For each polygon, capture the number of patients it overlays, compute the percentage, and find the polygon that covers 90%. Does anyone see a problem with Approach #3? It seems tedious and could be computationally intensive if applied to 150 hospitals and 5 million patients across the country. Even if the logic is sound, I don't know how to do it efficiently.
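One way to sidestep the isoline bookkeeping in Approach #3 is to evaluate the density surface directly at each patient point and threshold it at the 10th percentile of those values: the region where density exceeds that level is, by construction, a highest-density region holding roughly 90% of patients, with no polygon intersection step at all. A minimal sketch on synthetic points using SciPy's `gaussian_kde` (the kernel and default bandwidth are my assumptions, not part of the original workflow):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
pts = rng.normal(size=(2, 500))          # synthetic patient x,y for one hospital

kde = gaussian_kde(pts)                  # density surface fitted to patients
dens_at_pts = kde(pts)                   # density evaluated AT each patient
level = np.quantile(dens_at_pts, 0.10)   # 90% of patients sit at or above this density
inside = dens_at_pts >= level
frac = inside.mean()                     # fraction actually enclosed, ~0.90
```

Per hospital this is one KDE evaluation plus one quantile, so it scales to many hospitals; the density raster thresholded at `level` can then be converted to a polygon for mapping the catchment boundary.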
09-09-2014 10:34 AM

POST
Bill, as always, thanks for the very thorough and thoughtful reply. This will be invaluable guidance when we get around to addressing issues that we probably should be working on. But for the moment we have a simpler goal: to objectively define the "footprint" of each hospital. Two important points of clarification: 1. Our hospitals are spaced far enough apart that, for the most part, they do not compete with each other, and 2. We are not concerned (for the moment) about competition from hospitals owned by other systems. I know this is an unusual situation. We are addressing a narrow and unusual question that I can't elaborate on here. The question requires us to discover the footprint of the hospital; I have called it the catchment area, but that might not be the best term for it. Thanks, Mark G.
08-14-2012 03:53 AM

POST
I want to estimate the "catchment" area, or service-area footprint, for a series of about 125 U.S. hospitals. I have the geocoded residential addresses of patients for each hospital over a period of time, say 1 year. I don't think bounding containers such as convex or concave hulls are appropriate, because every hospital has a small percentage of patients whose addresses are very far away. For example, a Nebraska tourist may be hospitalized in Boston while on vacation; the Boston container would then reach to Nebraska. Not very useful.

My thought is to create density rasters from patient locations, one raster for each hospital, and then for each raster estimate the contour line that would capture 95% of the volume of patient density. I might have to adjust the cut-point higher or lower than 95%, but as long as it is consistent across hospitals that would be okay. Unfortunately I don't see an easy way to do this for 125 hospitals. For a single hospital I might be able to determine the 5th percentile cut-point of raster intensity and use that in the Contour List tool. But: 1. I am not sure that is the same as estimating 95% of patient density, and 2. It is not practical to do this manually for 125 rasters. I realize that Geospatial Modeling Environment might have what I need, but it is difficult in our IT environment to get that installed on our server. (Not impossible, but it is an option.) Suggestions?
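On point 1, the worry is justified: the 5th percentile of cell intensities is generally not the same as a 95%-of-volume cut. The correct threshold comes from sorting cell values in descending order and accumulating until 95% of the raster's total volume is reached. A minimal NumPy sketch (the function name and the idea of looping it over 125 exported rasters are my own, not a built-in Esri tool):

```python
import numpy as np

def volume_threshold(density, frac=0.95):
    """Smallest cell value t such that cells with density >= t
    together hold at least `frac` of the raster's total volume."""
    v = np.sort(density.ravel())[::-1]        # cell values, densest first
    csum = np.cumsum(v)                       # accumulated volume
    idx = np.searchsorted(csum, frac * csum[-1])
    return v[idx]
```

Fed to Contour List, the returned value yields exactly one contour per hospital, so the whole batch of 125 rasters can be handled in a short script loop rather than manually.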
08-07-2012 05:56 AM

POST
For the sake of simplicity, consider this problem limited to the lower 48 U.S. states. We have 8.5 million geocoded client residential locations and 1,000 sites of service. For each client we have estimated drive time and drive distance to the nearest site of service. We want to identify the best places in the country to place new service locations. How should we go about it?

Here are some working assumptions, though we invite your input on these as well:

1. We have no rock-solid model for how attractiveness of service decays with increasing drive time or drive distance. There is an industry standard that all clients located <= 30 minutes from an existing site of service are considered to have adequate vehicular access, and all others are considered not to have adequate access. Therefore a cluster of "unserved" clients centered 60 minutes from an existing site is just as interesting as one that is 240 minutes from an existing site.

2. All clients are of equal interest, save for their distance from existing sites.

We do not know how many new sites can be funded, so we would like to rank the identified hotspots or clusters of need. We would prefer to work with the point locations of clients rather than statistics aggregated within administrative borders. However, we will consider polygon analysis if the suggested unit of analysis is granular enough. We will consider statistical sampling but would need guidance on how to go about it.

We are working with ArcGIS 10.x. We have Spatial Analyst, Network Analyst, Geostatistical Analyst, and 3D Analyst, as well as StreetMap Premium. We are working on a 16-core server and we don't mind if processes take a few days (or more) to run. We also have SAS software at our disposal if necessary. Thanks in advance.
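As a crude but scalable first pass at the ranking step, one could grid-bin only the unserved clients (drive time > 30 minutes) and rank cells by count. A NumPy sketch; the cell size, function name, and synthetic data are all illustrative assumptions, not a recommendation over proper tools such as Hot Spot Analysis:

```python
import numpy as np

def rank_hotspots(xy, drive_min, cell=0.5, cutoff=30.0, top=5):
    """Bin clients with drive time > cutoff into a coarse grid of
    `cell`-degree squares and rank cells by unserved-client count."""
    unserved = xy[drive_min > cutoff]                  # assumption 1: binary access
    keys = np.floor(unserved / cell).astype(int)       # grid cell per client
    cells, counts = np.unique(keys, axis=0, return_counts=True)
    order = np.argsort(counts)[::-1][:top]             # biggest clusters first
    return [(tuple(cells[i]), int(counts[i])) for i in order]
```

Because all clients are weighted equally, the ranking follows assumption 2, and distance beyond the 30-minute cutoff deliberately does not affect rank, per assumption 1. On 8.5 million rows this is a single vectorized pass, well within reach of the described server.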
05-23-2011 07:52 AM

POST
Jay, thanks for that excellent response. We were somewhat aware of the capabilities you describe. They would be particularly helpful for spatially focused areas and specific seasonal conditions. However, our task is to produce the best possible estimate of "average" drive time between 25.5 million pairs of points across all 50 states (8.5 million residences and each of three different types of care location). I know of no across-the-board adjustments for year-round conditions or population density that can or should be made to improve the estimates coming from StreetMap Premium and Network Analyst. I'm looking either for validation of my assumption that there are no magical adjustments, or for information about those adjustments.
03-03-2011 08:54 AM

POST
We estimate the drive time between each of 8.5 million U.S. residential locations and the nearest of any of 900+ service locations. From these estimates we calculate access statistics by planning region. We are using Network Analyst 9.3 and StreetMap Premium (not Advanced), and we will soon be upgrading to 10.

Our regional planners and other colleagues are concerned that the drive-time estimates are often too low, i.e. not real-world. They want us to make adjustments for "in traffic" conditions, so we are exploring StreetMap Premium Advanced (and weighing its advanced pricing). However, we are also being pressured to develop other kinds of adjustments, such as seasonal adjustments for icy roads, or for "population density" or "geographic isolation". We don't entirely understand what such adjustments would entail, and we are resisting them. Our current position is that we are in no position to improve on the drive times estimated by Network Analyst and StreetMap Premium; we cannot ground-truth the entire U.S.

That said, we would like to be as informed as possible on the matter. Does anyone know of literature or white papers where others have evaluated adjustments to drive time other than rush-hour adjustments? Thanks.
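To make concrete what a blanket adjustment would entail, this is essentially all that a multiplicative "in traffic" or seasonal correction does. The factor values below are invented purely for illustration and have no empirical basis:

```python
def adjusted_minutes(free_flow_min, traffic_factor=1.25,
                     winter_factor=1.10, winter=False):
    """Scale a free-flow drive time by hypothetical in-traffic and
    seasonal multipliers (illustrative values, not calibrated)."""
    t = free_flow_min * traffic_factor
    if winter:
        t *= winter_factor
    return t
```

Note that a constant factor applied everywhere leaves the relative comparison between planning regions unchanged; only absolute cutoffs (such as a 30-minute access standard) move, which is one reason to resist ad hoc adjustments that cannot be ground-truthed.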
02-25-2011 05:09 AM

POST
Many 5-digit ZIP code numbers are represented by multiple polygons in the TeleAtlas StreetMap Premium sdc file named zip5.sdc. The same-numbered ZIP polygons always seem to be adjacent to each other. What's going on? I wonder if these polygons are actually ZIP+4 polygons that were not dissolved to form the true, larger 5-digit ZIP areas. BTW, the zip5.sdc file I am using is dated July 20, 2009.
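To confirm the pattern objectively, the layer's attribute table can be scanned for repeated ZIP5 values before deciding whether a dissolve is warranted. A small sketch over an exported list of attribute values (the field contents shown are made-up examples; check the actual schema of zip5.sdc):

```python
from collections import Counter

def multipart_zips(zip_values):
    """Given the ZIP5 attribute column (one value per polygon record),
    return the ZIP codes represented by more than one polygon."""
    counts = Counter(zip_values)
    return {z: n for z, n in counts.items() if n > 1}
```

If duplicates are widespread, a Dissolve on the ZIP5 field yields one (possibly multipart) feature per code; note that some ZIP codes legitimately cover discontiguous areas, so multiple polygons per code are not necessarily an error in the data.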
04-05-2010 04:09 AM