POST
Hi Jim, thanks for the input. I recently shifted my approach, though this issue was never really resolved. I basically made a cumulative viewshed using a distribution of known point locations. I wanted to compare it with a more general, or total, viewshed, which I created by generating random points throughout the area and making a viewshed from those. The goal was then to compare the two to see if the viewshed for the known sites was better (or at least statistically different) than the general "background" one. I still think this is a valid approach, with some recognition of assumptions and bias. This is where I was having an issue, with the total viewshed from 1000 points.

In reviewing this and comparable methods, though, I realized that this is not typically what is done. Most folks create a broad cumulative viewshed using all known locations and then examine the visibility of sample points against that background. The cumulative viewshed I generated, however, was based on only 10 locations, and those were also the only sample I needed. So what I did was use the total viewshed based on 1000 points and then use those data to compare the visibility of the known sites against the visibility of a smaller set of 500 random points. I basically extracted the raster values at the point locations for each sample and ran a two-sample K-S test on their cumulative distributions.

Nevertheless, I do think the first approach is valid: comparing a regional (cumulative) viewshed from known sites with a regional (total) viewshed made from a large number of randomly distributed points. It's a different approach, but in that case the data I would be comparing would not be the values themselves, which represent visible observer locations, but rather the distribution of visible pixels between them.
I think I would have to standardize the raster data, since one was generated by 10 points and the other by 1000, though I am somewhat confused on this (i.e., do I just divide the count column by 10 in one case and by 1000 in the other?). Anyway, to get to your specific questions: I used both the Viewshed tool (not Viewshed 2) and the Visibility tool, which produced the same results. I also generated one by creating separate viewsheds for each known site and adding them together in the Raster Calculator, but that also produced the same result (I can't easily and efficiently do that with 1000 points, however). I did use the Observer Points tool for a separate examination of the 10 known sites, but there's a limit of 16 input points, so I can't do that for the random points. In any case, I am not using that for this issue. I attached two images of the viewshed rasters. The **bleep** one is from the 10 known points, and the Total one was generated from 1000 random points. I also included screenshots of the attribute tables. Ignore anything other than the Value and Count columns; the other columns were where I was playing around with standardizing data. In the Total attribute screenshot you can see the issue I first described.
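For what it's worth, here is a minimal sketch of the two-sample K-S comparison described above, assuming SciPy is available and that the visibility values have already been extracted from the total-viewshed raster at the two point sets (e.g., with Extract Values to Points); the arrays here are made-up placeholders, not my actual data:

```python
# Two-sample Kolmogorov-Smirnov test comparing visibility values
# sampled at known site locations vs. random background locations.
# The arrays are hypothetical stand-ins for values extracted from
# the total-viewshed raster at each point set.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Hypothetical visibility counts (number of observers that can see
# each sampled pixel) at 10 known sites and 500 random points.
known_vis = rng.integers(low=50, high=300, size=10)
random_vis = rng.integers(low=0, high=200, size=500)

stat, p_value = ks_2samp(known_vis, random_vis)
print(f"K-S statistic = {stat:.3f}, p = {p_value:.4f}")

# A small p-value would suggest the two cumulative distributions
# differ, i.e., known-site visibility is not drawn from the same
# distribution as the random background sample.
```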
07-09-2021 09:49 AM

POST
Just as a follow-up: would I just add together all the Count cells that have the same Value?
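That idea can be sketched in pandas: sum the Count column across rows that share a Value. This assumes the attribute table has been exported with Value/Count columns (the numbers below are made up):

```python
# Collapse an exported viewshed attribute table so each Value
# (number of observers that can see a pixel) appears once, with
# Count summed across the duplicate rows.
import pandas as pd

# Hypothetical rows: several entries share Value = 1.
table = pd.DataFrame({
    "Value": [1, 1, 1, 2, 3],
    "Count": [500, 501, 503, 1200, 800],
})

collapsed = table.groupby("Value", as_index=False)["Count"].sum()
print(collapsed)
# Value 1 now has a single row with Count = 500 + 501 + 503 = 1504.
```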
06-30-2021 12:06 PM

POST
I generated a "total viewshed" of a DEM using a series of random observer points. In the resulting viewshed attribute table, the Value is the number of observer points that can see a pixel, and the Count is the number of pixels with that value. So, if you click on a cell, it will show you how many observer points can see that pixel. The attribute table, of course, summarizes this, so every pixel with a value of, say, 6 is rolled into one row listing the number of pixels with that value (say, 400).

This is fine, but I see a weird issue I can't figure out how to fix, especially when I use a lot of observer points: there are multiple rows with the same value in the Value column but different counts. In other words, rather than one row in which 500 pixels can be seen from 1 observer location, there is a series of rows in which, say, 500 pixels can be seen by 1, 501 can be seen by 1, 503 can be seen by 1, and so on. I assume this is happening because, given the large number of points distributed across the landscape, these are separate individual cells that can be seen by separate groups of observer points. So the 500 pixels that can be seen by 1 observer are different pixels than the 501 that can be seen by 1.

My confusion is in trying to understand the organization of the data for use in other applications. For example, for larger count values I can easily say something like "20,000 pixels are viewable by x number of sites," since there is only one row. But if I want to say the same thing for 1 site, I have several options. I find this is a challenge also in talking about any cumulative frequencies.
06-30-2021 10:50 AM

POST
Maybe I am misunderstanding. The attribute table groups cells with the same value. So, and you already know this, row one might have a value of 2 (whatever that represents) with a cell count of, say, 60, meaning that 60 cells in the raster have a value of 2. Wouldn't a frequency table instead list each cell as an individual entry with a value of 2? This is not a huge issue, since you ostensibly would know what the total number of cells in the raster is. But if you export the attribute table and treat it like a frequency table in some stats packages, the package might assume the sample size is the number of rows, not the number of pixels.
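The expansion described above can be sketched with numpy's repeat, turning a Value/Count table into one observation per pixel so a stats package sees the true sample size (toy numbers, not real data):

```python
# Expand a Value/Count attribute table into one observation per
# pixel, so downstream stats see n = total pixels, not n = rows.
import numpy as np

values = np.array([0, 1, 2])   # observers that can see the pixel
counts = np.array([5, 3, 2])   # pixels with that value

per_pixel = np.repeat(values, counts)
print(per_pixel)        # -> [0 0 0 0 0 1 1 1 2 2]
print(per_pixel.size)   # -> 10, the real sample size
print(per_pixel.mean())
```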
06-30-2021 10:31 AM

POST
Is there a way to easily convert the attribute table associated with a raster to a frequency table? I imagine that for most raster datasets such a table would be unwieldy.
06-30-2021 10:18 AM

POST
I posted this over in Geoprocessing the other day and thought I'd seek input here. (The text is the same as my 06-25-2021 post below, so I won't repeat it.)
06-27-2021 02:01 PM

POST
Thanks! I am running 10.7. I'll download Pro now from my university.
06-25-2021 01:03 PM

POST
You might resample the raster to increase the pixel size and smooth it out. I have similar problems even using 5 m LiDAR; it often picks up features that might be interesting but are obstacles to what I want to do. These two links might help. Alternatively, you might see if you can download lower-resolution data for your area and rerun the viewshed. https://desktop.arcgis.com/en/arcmap/10.3/tools/spatial-analyst-toolbox/altering-the-resolution.htm https://desktop.arcgis.com/en/arcmap/10.3/tools/spatial-analyst-toolbox/how-filter-works.htm
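This isn't the ArcGIS tooling the links describe, but the idea of coarsening a grid can be sketched in plain numpy as a block mean over a toy array (the 8x8 "DEM" and the factor of 4 are arbitrary):

```python
# Coarsen a high-resolution elevation grid by averaging
# non-overlapping blocks (e.g., 1 m cells -> 4 m cells), a rough
# stand-in for the Resample/Filter tools linked above.
import numpy as np

def block_mean(dem: np.ndarray, factor: int) -> np.ndarray:
    """Downsample a 2-D array by averaging factor x factor blocks."""
    rows, cols = dem.shape
    # Trim edges so the grid divides evenly into blocks.
    rows -= rows % factor
    cols -= cols % factor
    trimmed = dem[:rows, :cols]
    return trimmed.reshape(
        rows // factor, factor, cols // factor, factor
    ).mean(axis=(1, 3))

dem = np.arange(64, dtype=float).reshape(8, 8)  # toy 8x8 "DEM"
coarse = block_mean(dem, 4)
print(coarse.shape)  # -> (2, 2)
```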
06-25-2021 12:56 PM

POST
Probably an issue with the high resolution of the raster. What is the pixel size, 1 m?
06-25-2021 12:47 PM

POST
I've been doing a lot of comparative research on creating viewsheds and comparing the data, not just in ArcGIS but in other platforms and in general theory, and I keep running into some issues. Let me explain. I have 10 sites on hilltops whose intervisibility I want to assess. Line of sight analysis is not great in this case, so I just calculated viewsheds for each site, converted the visible raster data to polygons, and determined which sites fall into each other's viewsheds. Fairly simple, though not without some issues and simplifying assumptions. I can also, depending of course on the size of the base DEM and other parameters, determine what the average viewshed area is for each site.

However, I want to do a couple of additional things. I'd like to compare the overall average viewshed, or visibility, of the sites with what the average visibility would be throughout the area, regardless of physical location. That is, I'd like to assess quantitatively whether the hilltop locations have overall better visibility than other locations. First, I used the Raster Calculator and added all the site viewsheds together to create their total viewshed. This wasn't strictly necessary, since both the Viewshed tool and the Visibility tool produce the same resulting raster, but that's what I did. To get the comparative dataset, I generated 1000 random points in the study area, focusing a bit more on the middle to reduce edge effects. In this case I can't take the time to generate individual viewsheds for each point, so I just did the visibility analysis. As I said, I could have done that with the hilltop sites, but then I would not have been able to assess each hilltop site's individual viewshed area. Doing this produces another raster. You can clearly see that the hilltop locations are very visible in the random set, but that they have an overall lower visibility. However, making meaningful comparisons is hard.

I looked at the source statistics for both rasters, and the mean for the hilltop total viewshed raster is much higher (assuming the source statistics are recording visible pixels, which I guess they would be). But how do I compare these statistically? That's my overall question, but here are some embedded questions that keep popping up:

1) To compare these two rasters, would I need to normalize them to put them on the same scale? In this case, would I divide the rasters in the Raster Calculator by the number of sites used (10 and 1000), or would I normalize by the number of pixels (I doubt this, since they are the same in both cases)?

2) Although the viewsheds show how many pixels (Count) are visible by how many sites (Value), they show neither which sites are visible nor what the viewshed area is for each site. Now, I can do this with more steps for the 10 hilltop sites, as I discussed above. Alternatively, I can use the Observer Points tool (or the observers function in Visibility) to produce a table that lists which observer points are visible and the number of pixels visible. I could sum up all the times each site is listed as 1 and add up the counts to get the differences in their viewsheds in terms of pixels. This really isn't necessary, since, as I said, I can do that more easily by either recording the total visible pixels in their individual viewsheds or converting the visible portions of the individual viewsheds to polygons and calculating area. However, and here's an issue, I can't do that with the random points: the Observer Points tool has a 16-point limit and, as I said, calculating individual viewsheds for all 1000 points would take too long. I played around with extracting the underlying raster data and appending it to the point data files for each group, but I realized that that does not give me what I want at all, and I'm not really sure what those data are saying anyway. In other words, the rasters show visibility in terms of the location and number of pixels viewable by a particular point, and they give an average visibility. They don't show the sizes, or average sizes, of the individual viewsheds of the points. So the rasters help address one question (average regional visibility of all hilltop sites against average regional visibility in general, as approximated by a series of 1000 points), but they don't address another (the average viewshed size of the hilltop sites against the average viewshed size of the random sample).

3) If I am stuck just comparing the hilltop total viewshed against the random-point viewshed, what kind of tool can I use to test for significant differences? I can't see much in ArcMap. I thought about generating new point files for each raster, with a point for every pixel and the raster data as point attributes. Then, at least, I could have a data file from which I could more easily calculate means, etc., and conduct some tests. But, as I said, the source statistics for the rasters are there; how can I compare them meaningfully? Anyway, I'm sorry this is so long. I suspect a number of other issues and problems might arise or be obvious in my post.
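The normalization in question 1 (dividing each raster by its number of observer points so both end up on a 0-1 "proportion of observers" scale) can be sketched in numpy; in ArcGIS this would be a Raster Calculator expression like raster / 10, and the small arrays here are toy stand-ins for the real grids:

```python
# Normalize two cumulative-viewshed grids to a common 0-1 scale by
# dividing each cell (observers that can see it) by the number of
# observer points used to build that grid.
import numpy as np

hilltop_vs = np.array([[0, 4], [7, 10]], dtype=float)       # built from 10 points
random_vs = np.array([[120, 300], [50, 980]], dtype=float)  # built from 1000 points

hilltop_prop = hilltop_vs / 10.0    # proportion of the 10 sites seeing each cell
random_prop = random_vs / 1000.0    # proportion of the 1000 points

# Cell values are now comparable proportions on the same scale.
print(hilltop_prop.mean(), random_prop.mean())
```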
06-25-2021 12:42 PM

POST
What ended up working was similar, though I did not need to fool with Access. I created a personal geodatabase and imported the table. Once there, I was able to change the data type of the z column to double, like the x and y data. Then I was able to create the feature class from the XYZ data, no problem. I still have no clue what was wrong with the original table, though.
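Outside ArcGIS, the same dtype fix can be sketched in pandas before the table ever reaches the geodatabase (the column names and coordinates are made up):

```python
# Coerce a z column that was imported as text into a float
# ("double") column so x, y, and z all share a numeric type.
import pandas as pd

table = pd.DataFrame({
    "x": [500100.0, 500110.0],
    "y": [4100200.0, 4100210.0],
    "z": ["153.2", "161.7"],   # z came in as text
})

table["z"] = pd.to_numeric(table["z"], errors="coerce")
print(table.dtypes)
# x, y, z are now all float64; unparseable z entries become NaN
# instead of silently breaking the import.
```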
02-20-2016 04:44 PM

POST
I was just using Create Feature Class From XY Table in ArcCatalog. I tried your second option and, again, the z data do not appear (i.e., it just gives me x and y in the dropdown despite the fact that there is a column of z data). That said, all the points show up. But since I want to create a TIN, it doesn't help much if the elevation data aren't being read as elevation data.
02-20-2016 04:27 PM

POST
I can't recall ever having trouble creating feature classes from XY tables. Usually I have a table in Excel, save it as either CSV or TXT, and follow the steps in ArcCatalog. I have a bunch of x,y,z data points I recently converted from an arbitrary grid to UTM. They look fine in some programs like Global Mapper, but I'm having a hell of a time in ArcGIS/ArcCatalog. When it works, only about 4 points show up (when there are almost 2000). Also, most of the time ArcCatalog is not reading the z value and giving me that option in the dropdown; sometimes it does, and then when I try again it doesn't. I checked the properties, and the z values are set as either text or long integer, whereas x and y are double. I can't seem to change the data types, though. I basically just want a shapefile of points with the x, y, and z data. I've tried formatting the cells in Excel as numbers before saving as CSV or TXT, with no change. And whenever I try to open the file as an Excel worksheet rather than CSV or TXT, I get an error saying something like "can't connect with database." Any suggestions? I've never encountered this issue before.
02-19-2016 03:03 PM

POST
The column headings are "Datum" (which is the name I gave), "POINT_X," and "POINT_Y." Could the z data still be on the unit somewhere, in a file I missed when I copied them over to the machine? It seems like a major omission that ArcPad would automatically collect the x and y data but not the z data.
02-19-2016 11:23 AM