POST | 12-09-2013 10:06 AM
I don't believe there's any GIS system which directly implements Plurigaussian simulation; most of the implementations I've seen are written for numerical-computing environments such as MATLAB or Fortran. The necessary mathematical primitives are, to my knowledge, all included in Geostatistical Analyst, so I imagine it'd be reasonable to build this as an extension, but I'm unaware of an existing implementation.

POST | 12-07-2013 09:00 PM
Check out this existing thread for some initial ideas on how you might go about this. Another option is to sample your lines at regular, small intervals, then run a Near analysis on the two sets of points and average the distances between the paired point samples.
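A rough sketch of that second approach -- the paths are hypothetical, and FeatureVerticesToPoints assumes an Advanced license:

import arcpy

line_a = 'c:/data/work.gdb/line_a'   # hypothetical inputs
line_b = 'c:/data/work.gdb/line_b'

# densify copies of the lines so vertices fall at small, regular intervals
arcpy.CopyFeatures_management(line_a, 'in_memory/line_a')
arcpy.CopyFeatures_management(line_b, 'in_memory/line_b')
arcpy.Densify_edit('in_memory/line_a', 'DISTANCE', '1 Meters')
arcpy.Densify_edit('in_memory/line_b', 'DISTANCE', '1 Meters')

# turn the vertices into the two point sets
arcpy.FeatureVerticesToPoints_management('in_memory/line_a', 'in_memory/pts_a')
arcpy.FeatureVerticesToPoints_management('in_memory/line_b', 'in_memory/pts_b')

# Near writes a NEAR_DIST field onto pts_a; average it
arcpy.Near_analysis('in_memory/pts_a', 'in_memory/pts_b')
distances = [row[0] for row in arcpy.da.SearchCursor('in_memory/pts_a', ['NEAR_DIST'])]
print(sum(distances) / len(distances))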

POST | 12-05-2013 11:35 AM
Try saving the output to another location, or even another drive, to see if the same behavior persists. I've seen something like this before, and it turned out to be a permissions issue, though that was with a profile directory, not C:/temp. If it works in a GDB in a different location, then it's worth seeing if we can create a reproducible issue. Here's the more detailed description of the error:

Description: The output raster dataset could not be created in the specified format. There may already be an output raster with the same name and format. Certain raster formats have limitations on the range of values that are supported. For example, the GIF format only supports a value range of 0 to 255, which would be a problem if the output raster has a range of -10 to 365.

Solution: Check that a raster with the same name and format does not already exist in the output location. Also check the Help for the technical specifications of raster dataset formats to make sure that the expected range of values in the output is compatible with the specified format.

So, I'd try saving to a different location, and failing that, try writing the output out to another driver -- like a GeoTIFF file. If that works, you should be able to copy the raster into the FGDB afterwards.
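A minimal sketch of that fallback, with hypothetical paths (Slope here is just a stand-in for whatever tool produced your output):

import arcpy

# write the result out as a GeoTIFF first...
result = arcpy.sa.Slope('c:/data/elevation.tif')   # stand-in for your actual tool
result.save('c:/temp/output.tif')

# ...then copy it into the file geodatabase once it exists on disk
arcpy.CopyRaster_management('c:/temp/output.tif', 'c:/data/work.gdb/output_raster')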

POST | 12-05-2013 11:08 AM
If you change the output type to string, you'll see the leading zeros. But as msayler mentioned, if you store the values as dates, they'll be displayed according to locale rather than in an explicitly specified format.

POST | 12-03-2013 10:00 AM
Try something like this:

import arcpy

input_featureclass = 'c:/data/work.gdb/features'  # your feature class here
input_field = "Date_Time"
expression = 'formatDate(!{input_field}!)'.format(input_field=input_field)
code_block = """
import dateutil.parser

def formatDate(input_date):
    # assumes the field holds date strings; dateutil handles most common formats
    parsed_date = dateutil.parser.parse(input_date)
    return parsed_date.strftime("%m/%d/%Y")"""

# a TEXT field preserves the leading-zero formatting; a DATE field would again display by locale
arcpy.AddField_management(input_featureclass, "formatted_output_date", 'TEXT')
arcpy.CalculateField_management(input_featureclass, "formatted_output_date",
                                expression, "PYTHON_9.3", code_block)

strftime's %m and %d will produce values with leading zeros where needed, and the dateutil parser will work for dates formatted in other ways as well. cheers, Shaun

POST | 12-02-2013 03:33 PM
Jimeno, I'm not sure of the specifics for the Points Solar Radiation tool, but testing it locally, it takes about one second per site per day on my machine, which gives a ballpark estimate of 41 hours for 400 sites over a full year, using the tool defaults against a 1m LIDAR dataset stored as a mosaic. You mentioned that you're using 8 cores -- do you have custom code to distribute the workload across them? If not, check out multiprocessing along with background processing to take advantage of those additional cores (a sketch follows the list below). A few other things off the top of my head:
- Use an in-memory workspace to store the results.
- If memory is available, also try copying your input raster into the in-memory workspace.
- Use a file geodatabase instead of plain old shapefiles for the outputs.
- Try adjusting the resolution of the input DEM to see its effect on performance.
- Clip the DEM down as much as possible, to just a buffered area around the points themselves.
- If the features don't fit into an in-memory workspace, serve them off a fast disk -- SSD if at all possible.
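Here's a minimal sketch of the multiprocessing idea -- all paths are hypothetical, and it assumes you've already split your sites into one feature class per worker. Each worker writes to its own output workspace, since file geodatabases don't handle concurrent writes well:

import multiprocessing
import arcpy

def run_chunk(args):
    # each worker computes a year of solar radiation for one chunk of sites
    dem, sites_chunk, out_fc = args
    arcpy.CheckOutExtension("Spatial")
    arcpy.sa.PointsSolarRadiation(dem, sites_chunk, out_fc, "", 35, 200,
                                  arcpy.sa.TimeWholeYear(2013))
    return out_fc

if __name__ == '__main__':
    dem = 'c:/data/lidar.gdb/elevation'
    jobs = [(dem,
             'c:/data/sites.gdb/sites_{}'.format(i),
             'c:/data/out_{}.gdb/radiation'.format(i))
            for i in range(8)]
    pool = multiprocessing.Pool(processes=8)
    print(pool.map(run_chunk, jobs))
    pool.close()
    pool.join()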

POST | 12-02-2013 03:07 PM
Generally, it'll probably be worth figuring out a way to display multiple search results to your users, unless you do have a relatively limited set of terms and can enumerate the alternatives. One way of handling ambiguous input from users is fuzzy string matching. It's related to the regular expressions Stacy mentioned, but is specifically for comparing inexact terms. There are a number of fuzzy matching algorithms, many of them built into databases, such as SOUNDEX and Double Metaphone. If you're using SQL Server, you might want to look into Full Text Indexing, which can be used to provide fuzzy search automatically. By doing it in the database, you save yourself much of the custom Python code needed to interact with the results; you can just use the sorted database results instead. If you want to stick to doing things in Python, one way is to build up a list of all of your search terms, and then run a Double Metaphone or other similarity comparison against the user's input. One library I've used in the past that is handy is jellyfish, which contains a number of different string comparison algorithms. cheers, Shaun
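For instance, a minimal sketch with jellyfish, ranking stored terms by edit distance to what the user typed (the term list here is made up):

import jellyfish

search_terms = [u'Main Street', u'Maine Avenue', u'Marina Boulevard']  # hypothetical
user_input = u'Mian Stret'

# rank the stored terms by Levenshtein edit distance to the user's input
ranked = sorted(search_terms,
                key=lambda term: jellyfish.levenshtein_distance(user_input.lower(),
                                                                term.lower()))
print(ranked[0])  # closest match: 'Main Street'

You could take the top few entries from the ranking and present those to the user as alternatives.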

POST | 11-21-2013 07:34 PM
Hi Matthew, Perhaps the simplest approach is to convert the polygon layer into a 'count of overlaps' vector layer, and then rasterize that. There's a tool you can download to do it in one step, or you can follow the steps in this GIS.SE question to perform the same analysis. From there, you should be able to do a simple Polygon to Raster conversion. If you wanted to include polylines, just buffer them by a small distance to convert them to polygons prior to the 'count overlaps' step. There are other approaches, like generating a fishnet and intersecting it with your polygon data, but I think the above is the simplest unless your data is large and complex. cheers, Shaun
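If you'd rather script the 'count of overlaps' step, here's a rough sketch of the same idea using Union and Spatial Join -- the paths are hypothetical, and single-input Union may require an Advanced license:

import arcpy

polys = 'c:/data/analysis.gdb/my_polygons'      # hypothetical input
pieces = 'c:/data/analysis.gdb/union_pieces'
counted = 'c:/data/analysis.gdb/overlap_counts'

# Union the layer with itself to split overlapping areas into separate pieces
arcpy.Union_analysis([polys], pieces)
# count how many original polygons each piece falls within;
# Join_Count then holds the overlap count for each piece
arcpy.SpatialJoin_analysis(pieces, polys, counted, "JOIN_ONE_TO_ONE",
                           "KEEP_ALL", match_option="WITHIN")
# rasterize on the overlap count
arcpy.PolygonToRaster_conversion(counted, "Join_Count",
                                 'c:/data/analysis.gdb/overlap_raster')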

POST | 11-21-2013 06:35 PM
Anthony, There are a few ways to solve your problem. One is to create a ModelBuilder workflow which uses the Raster Calculator tool with the particular math you're using to convert between the images and reflectance. Save this model, and then from the Catalog pane you can right-click the model and select 'Batch', which will let you run the calculation on multiple rasters. An alternative approach, if you've done any Python programming, is to use the arcpy raster functions to do the work directly. That way you could, say, have a file listing each folder with your rasters, iterate over the list, and perform the calculation directly in Python, which might look something like:
import arcpy
# map algebra on Raster objects requires the Spatial Analyst extension
arcpy.CheckOutExtension("Spatial")
band_1 = arcpy.sa.Raster('c:/path/to/band1.tif')
band_3 = arcpy.sa.Raster('c:/path/to/band3.tif')
# normalize the band sum by the combined maxima; .maximum reads the band statistics
sum_reflectance = (band_1 + band_3) / (band_1.maximum + band_3.maximum)
sum_reflectance.save('c:/path/to/sum_reflectance.tif')
cheers, Shaun

POST | 11-21-2013 06:17 PM
Todd, I think your results line up as expected. The two models differ in how they handle diffuse radiation, as described in the 1999 Fu and Rich paper: in a uniform diffuse model, sometimes referred to as a "uniform overcast sky" (UOC), incoming diffuse radiation is the same from all sky directions, while in a standard overcast (SOC) diffuse model, the diffuse radiation flux varies with zenith angle. So in the uniform model, zenith angle plays no role: valleys will receive diffuse radiation similar to peaks, because as long as the sky map isn't blocked, either location gets the same diffuse radiation. Perhaps the best way to build an intuition for this is to play with the formulae provided in the Solar Analyst user guide, pages 11-12, to see how the results differ for individual sky sectors. cheers, Shaun
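For reference, the sky-sector weights as I recall them from the tool documentation -- treat this as an approximation to check against the user guide. Here $\theta_1$ and $\theta_2$ bound a sector's zenith angles and $Div_{azi}$ is the number of azimuthal divisions:

$$W_{UOC} = \frac{\cos\theta_2 - \cos\theta_1}{Div_{azi}}, \qquad W_{SOC} = \frac{2\cos\theta_2 + \cos 2\theta_2 - 2\cos\theta_1 - \cos 2\theta_1}{4\,Div_{azi}}$$

The extra $\cos 2\theta$ terms are what make the SOC flux fall off toward the horizon.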

POST | 11-21-2013 10:46 AM
Right -- AlterAliasName is new at 10.1, but it's the right way of doing this for 10.1 and later.
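For example (the feature class path is hypothetical):

import arcpy
# works on tables and feature classes in a geodatabase
arcpy.AlterAliasName('c:/data/work.gdb/parcels', 'Tax Parcels')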

POST | 11-20-2013 10:13 AM
Hello Jimeno, I'm not sure exactly what you're looking for, but you should be able to use the solar functions, perhaps in conjunction with other packages, to answer your question. As you mention, when you run Points Solar Radiation on a single day, you get back the time steps but not the actual times they correspond to. One way to recover the corresponding hours would be to use a script such as this one calculating sunrise and sunset, or a package like PyEphem for more precise calculations. From there, you should be able to combine the solar observations from ArcGIS with 'actual time', matching the output T0 to the sunrise time and the final Tn to the sunset time. So my pseudo-code for a single day would be something like:
import arcpy
import ephem

julian_day = 165
# run the solar calculation for all sites on this day; tool defaults used here
arcpy.sa.PointsSolarRadiation("elevation", "my_sites",
                              "output_global_radiation", "", 35, 200,
                              arcpy.sa.TimeWithinDay(julian_day, 0, 24))
# figure out sunrise and sunset. If the points are close together, do this once
# for all sites; if they're spread over a large geographic area, you'll need to
# make these calculations for each site.
obs = ephem.Observer()
obs.date = '2013/6/14'    # the calendar date for julian day 165
obs.lat = '34.0'          # obtained from my_sites
obs.long = '-120.0'       # obtained from my_sites
obs.elevation = 15        # extracted from the elevation raster at the site
sunrise = obs.previous_rising(ephem.Sun())            # sunrise
noon = obs.next_transit(ephem.Sun(), start=sunrise)   # solar noon
sunset = obs.next_setting(ephem.Sun())                # sunset
# then, map those times to the T0...Tn time steps (half-hour intervals by
# default) and output the result to a CSV or the like.
Does that help get you started? I think something like this is easier than trying to dig up the internals to figure out the corresponding times. cheers, Shaun

POST | 09-16-2013 09:24 PM
Austin, I can confirm this is fixed in 10.2 SP1 (10.2.1): http://support.esri.com/en/bugs/nimbus/TklNMDkzMjA0 cheers, Shaun

POST | 08-23-2013 09:20 AM
At least as of 10.1 SP1, this issue is resolved: you can safely use "DELAUNAY" or "CONSTRAINED_DELAUNAY" as the keywords, and any other value will return an error.

POST | 07-12-2013 04:00 PM
Python 2.7.3 is now confirmed for 10.2, and both xlrd and xlwt are included in the base install.
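A quick smoke test for both libraries -- the paths here are hypothetical:

import xlrd
import xlwt

# read the first cell of the first sheet of an existing workbook
book = xlrd.open_workbook('c:/data/example.xls')
sheet = book.sheet_by_index(0)
print(sheet.cell_value(0, 0))

# write a one-cell workbook
out = xlwt.Workbook()
ws = out.add_sheet('Sheet1')
ws.write(0, 0, 'hello from xlwt')
out.save('c:/data/output.xls')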