POST
We would appreciate it if you do not use the ESRI community site to steer our customers away from our integrated partner solution for your own private business agenda. This is unprofessional behavior.
Posted 07-16-2018, 09:10 AM

POST
One additional thought - intuitively you always want to use data corrected to surface reflectance, and this makes sense. However, because Tasseled Cap is a special implementation of Principal Components Analysis (PCA), the first and second components (Brightness and Greenness) account for most of the variability in the image, the third component (Wetness) is minor, and the fourth component contains most of the noise, i.e., haze, clouds, and other atmospheric and environmental effects. So don't use the fourth component, or use it only as an indicator of noise (its amount and spatial distribution). Thus, atmospherics have less impact on TC analysis than on other types of analysis. When using Landsat 8, WorldView-3/4, and other sensors with more than 4 bands, the lesser components (components 4-8) containing noise are even more pronounced.
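As a rough illustration of how the transform works, the sketch below applies Tasseled Cap coefficients as a linear combination of bands. The coefficient values shown are ones commonly cited for Landsat 5 TM; treat them as illustrative, since the correct set depends on your sensor and calibration level.

```python
import numpy as np

# Illustrative Tasseled Cap coefficients commonly cited for Landsat 5 TM
# (bands 1-5 and 7). The correct coefficients are sensor-specific --
# use the published set for your sensor and calibration level.
BRIGHTNESS = np.array([0.3037, 0.2793, 0.4743, 0.5585, 0.5082, 0.1863])
GREENNESS = np.array([-0.2848, -0.2435, -0.5436, 0.7243, 0.0840, -0.1800])
WETNESS = np.array([0.1509, 0.1973, 0.3279, 0.3406, -0.7112, -0.4572])

def tasseled_cap(bands):
    """Apply the TC transform to a (6, rows, cols) band stack."""
    stack = bands.reshape(6, -1)                     # flatten pixels
    coeffs = np.stack([BRIGHTNESS, GREENNESS, WETNESS])
    out = coeffs @ stack                             # (3, n_pixels)
    return out.reshape(3, *bands.shape[1:])

# Tiny synthetic 2x2 scene with 6 bands of constant reflectance
scene = np.full((6, 2, 2), 0.25)
tc = tasseled_cap(scene)
print(tc.shape)  # (3, 2, 2): Brightness, Greenness, Wetness planes
```

Because the transform is a fixed linear projection, the noise concentrated in the later components falls out automatically when you keep only the first three planes.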
Posted 01-12-2018, 02:57 PM

POST
Thanks for the good question, and subsequent input. The Tasseled Cap coefficients are derived from uncorrected imagery, therefore uncorrected imagery is required to obtain valid TC results. The coefficients are normalized to Top of Atmosphere, so some correction is applied inherently; for more details, see: Lance D. Yarbrough, Kumar Navulur & Rachana Ravi (2014). Presentation of the Kauth–Thomas transform for WorldView-2 reflectance data, Remote Sensing Letters, 5:2, 131-138. We have asked DigitalGlobe for information regarding their plans to update TC coefficients for Surface Reflectance products, and will report back when we receive a disposition from them.
Posted 01-12-2018, 10:23 AM

POST
You need to add an average ground elevation. Your lowest value is 5.7m, so you could input 5.5m or perhaps 5m. When I did my test I input 1m in case your drone was not sitting on the ground.
Posted 07-25-2017, 02:55 AM

POST
Hi Aleksandra, Many thanks for your video data. I successfully multiplexed it using the CSV metadata file (try_all2_proper_headings.csv) provided by Cody, in which he typed the proper MISB heading field names. It looks good, and the video footprints on the map look plausible (I used the Esri Imagery Basemap). One tiny detail which may have affected you: you also need to close the metadata CSV file before you run the multiplexer tool. So after making sure the UTC timestamp column format is "Number" with 0 decimal places, save and close the file. Otherwise, you did a good job of ensuring the UTC timestamp is in the correct format, and all 11 required MISB-compliant metadata fields look good. I suspect you did all the "tricky" steps in preparing the metadata CSV just fine, but got hung up on unintuitive Excel procedural steps. These are not really Esri issues but Microsoft Excel issues; however, since they affect the workflow, they need to be clearly spelled out in the help documentation, which I will pass on to the documentation team. All the best! Jeff
Posted 07-24-2017, 03:54 PM

POST
I assume you mapped Yaw as the Platform Heading. The values in your CSV look plausible, except the Sensor Relative Elevation angle of -10 is very shallow and the footprint of the video frame may have difficulty intersecting the ground. But the video should still process, although it may be missing some video footprints per the reason above. To eliminate one more variable, please rename your field headings in the CSV to the heading names specified in the field mapping template, and don't include the template as an input into the GP tool. Can you attach your video? We will try to multiplex it using this latest CSV. Thanks. Jeff
Posted 07-21-2017, 12:34 PM

POST
Hi Aleksandra, Thanks for the CSV file - it is helpful. I see that the delimiters are semicolons; please convert these to commas. Replying to your numbered items:
1) Looks like the MP4 is fine.
2) The tool doesn't like those semicolons.
3) OK, I assume this app works and extracts the relevant metadata.
4) The format looks good, but it may not be properly synced with your video frames if you selectively chose the metadata at 1-second intervals. We'll have to see your resulting multiplexed video file.
5) Yes, this is a pain, but you need to specifically format the UNIX timestamp column every time you open Excel. It is a case of MS being too helpful.
6) The Phantom is recording RELATIVE height, whereas you need ABSOLUTE height. So determine the true elevation at your launch site (you can use the Esri Elevation service), then add the Phantom relative height to that elevation. Just add another column to your spreadsheet and enter the true elevation (say 500m) plus the Phantom height for all the cells (500 + 0 for the launch, 500 + 10 for a Phantom relative height of 10m, etc.). Add the field heading name "Sensor True Altitude" to this new column, and remove the heading from your Phantom altitude column, of course. So when calculating the 4-corner data in the Video Multiplexer tool, the average elevation to enter is 500 in this example. Let us know how it works out for you. Cheers, Jeff
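As a sketch of items 2 and 6, assuming a 500 m launch-site elevation and hypothetical column names (your export's field names may differ), the following converts a semicolon-delimited file to commas and adds the Sensor True Altitude column:

```python
import csv
import io

# Hypothetical semicolon-delimited export with a relative-height column,
# as described above. Column names here are illustrative assumptions.
raw = """UnixTimeStamp;SensorLatitude;SensorLongitude;RelativeHeight
1433430000000000;39.9197;-105.1172;0
1433430001000000;39.9197;-105.1172;10
"""

LAUNCH_ELEVATION_M = 500  # assumed true elevation at the launch site

reader = csv.DictReader(io.StringIO(raw), delimiter=";")
rows = []
for row in reader:
    # Sensor True Altitude = launch elevation + Phantom relative height
    row["Sensor True Altitude"] = LAUNCH_ELEVATION_M + float(row.pop("RelativeHeight"))
    rows.append(row)

out = io.StringIO()
writer = csv.writer(out)  # default delimiter is a comma
writer = csv.DictWriter(out, fieldnames=rows[0].keys())
writer.writeheader()
writer.writerows(rows)
print(out.getvalue())
```

The same 500 m value is then what you would enter as the average elevation when calculating the 4-corner data.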
Posted 07-20-2017, 04:12 PM

POST
Hello Aleksandra, I'm curious how you got the proper metadata extracted from your Phantom 4 and into a CSV. An app or utility provided by whom? A couple of quick things to check: first, whether your UNIX time stamp is in microseconds - a 16-digit number. Also, if you open your CSV file in Excel, be sure to specifically format the time stamp column as a number before saving the CSV, because Excel automatically converts the number to scientific notation even if you don't change anything in the file. And make sure you check the "Calculate Four Corner Data" box and assign an average ground elevation in the Multiplex Video GP tool. Best, Jeff
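A quick way to catch both problems above (wrong time unit, or Excel's scientific-notation conversion) is to validate each timestamp as a 16-digit integer; a minimal sketch:

```python
# Sanity check for the timestamp issues described above: each UNIX
# time stamp should be a 16-digit integer (microseconds since epoch).
# Values mangled by Excel (e.g. "1.43343E+15") or recorded in
# milliseconds (13 digits) fail the check.
def valid_misb_timestamp(value: str) -> bool:
    return value.isdigit() and len(value) == 16

assert valid_misb_timestamp("1433430000000000")   # OK: microseconds
assert not valid_misb_timestamp("1.43343E+15")    # Excel scientific notation
assert not valid_misb_timestamp("1433430000000")  # milliseconds: too short
```

Running a check like this over the timestamp column before multiplexing avoids a silent failure later in the GP tool.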
Posted 07-19-2017, 12:18 PM

BLOG
Raster analytics using ArcGIS Enterprise is a flexible raster processing, storage, and sharing system that employs distributed computing and storage technology. Use raster analytics to apply the rich set of raster processing tools and functions offered in ArcGIS, build your own custom functions and tools, or combine multiple tools and functions into raster processing chains to execute your custom algorithms on large collections of raster data. Source data and processed results are stored, published, and shared across your enterprise accordingly. This extensive capability can be further expanded by leveraging cloud computing resources. The net result: image processing and analysis jobs that used to take days or weeks can now be done in minutes or hours, and jobs that were impossibly large or too daunting are now within easy reach.

What can raster analytics do?

By leveraging ArcGIS Enterprise, raster analytics enables you to:
- Quickly process massive imagery or raster datasets in a scalable environment
- Execute advanced, customized raster analysis
- Share results with individuals, departments, and organizations within or outside your enterprise

Raster analytics is ArcGIS Image Server configured for raster analysis in a processing and storage environment that maximizes processing speed and efficiency. Built-in tools and functions cover preprocessing, orthorectification and mosaicking, remote sensing analysis, and an extensive range of math and trigonometry operators; your custom functions can extend the platform's analytical capabilities even further. Fully utilize your existing ArcGIS Image Server on-site, or exploit the elastic processing and storage capacity of cloud platforms such as Amazon Web Services and Microsoft Azure to dynamically increase or reduce your capacity depending on the size and urgency of your projects.
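To make the "processing chain" idea concrete, here is a minimal conceptual sketch in plain numpy (not the actual ArcGIS API): each function transforms a block of pixels, and chains of such functions can be applied tile by tile across many workers. The NDVI-then-threshold chain is a hypothetical example.

```python
import numpy as np

# Conceptual sketch only: a raster processing chain is a sequence of
# functions applied per block of pixels, which is what allows the work
# to be distributed tile by tile across many machines.
def ndvi(block):
    """Compute NDVI from a (2, rows, cols) stack of (red, NIR) bands."""
    red, nir = block[0], block[1]
    return (nir - red) / (nir + red + 1e-9)  # epsilon avoids divide-by-zero

def threshold(block, cutoff=0.3):
    """Binary vegetation mask: 1 where NDVI exceeds the cutoff."""
    return (block > cutoff).astype(np.uint8)

def run_chain(block, chain):
    """Apply each function in the chain to the block in order."""
    for func in chain:
        block = func(block)
    return block

# One two-band (red, NIR) tile; each tile could run on a different worker.
tile = np.array([[[0.2, 0.2], [0.2, 0.2]],
                 [[0.6, 0.1], [0.6, 0.1]]])
vegetation_mask = run_chain(tile, [ndvi, threshold])
print(vegetation_mask)  # [[1 0] [1 0]]
```

Because each step is a pure function of its input block, the same chain can be submitted once and executed in parallel over an arbitrarily large collection of tiles.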
The scalable environment of raster analytics empowers you to implement computationally intensive image processing that used to be out of reach or cost-prohibitive, saving you time, money, and resources. Raster analytics is also designed to streamline and simplify collaboration and sharing: users across your enterprise can contribute data, processing models, and expertise to your imagery project, and share results with individuals, departments, and organizations in your enterprise. Finally, raster analytics using ArcGIS Enterprise integrates your image processing and analysis with the world's leading GIS platform, and allows users to seamlessly draw on the world's largest collection of online digital maps and imagery.

How does raster analytics work?

ArcGIS Image Server configured for the raster analytics role provides software and user interfaces to organize and manage your processing, storage, and sharing of raster and feature data, maps, and other geographic information on a variety of devices. This integrated system manages the distribution of processing and the storage of results (1) on-premises and behind the firewall for classified deployments, (2) in cloud processing and storage environments, or (3) in a combination of both. The foundation of raster analytics is ArcGIS Enterprise, which includes the Enterprise GIS portal, ArcGIS Data Store, Image Server configured for raster analytics, a raster data store, and ArcGIS Web Adaptor. ArcGIS Enterprise integrates the components of the raster analytics system to support scalable, real-world workflows. Scale your processing and storage capabilities by deploying ArcGIS Enterprise in the cloud via Microsoft Azure or Amazon Web Services (AWS). For example, you can automatically scale capacity up and down according to conditions you define, or automatically distribute application traffic across multiple instances for better performance.

ArcGIS Enterprise makes deployment easier by providing Cloud Builder for Microsoft Azure, or AWS CloudFormation with sample templates, to configure and deploy your system in the cloud. Develop, test, and optimize your raster processing chains using Esri's rich set of more than 200 functions and tools in the familiar ArcGIS Desktop or web map viewer. Once verified and optimized in the dynamic on-the-fly processing environment, submit your processing chain to ArcGIS Portal, which manages the distribution of processing, storage, and publication of results. The ideal deployment of raster analytics is composed of three server sites performing the primary roles of portal hosting server, raster analysis server, and image hosting server. Two licenses are required for raster analytics: ArcGIS Enterprise and ArcGIS Image Server.

The hosting server is your portal's server for standard portal administration and operations, such as managing and dispatching processing, storage, and publication of results to raster analysis servers, image servers, and data stores. It also hosts the ArcGIS Data Store for GIS data and allows users to publish data and maps to a wider audience as web services. Raster analytics jobs are processed by image servers dedicated to raster analytics, comprising one or more servers, each with multiple processing cores. The image processing and raster analytics tasks are distributed at the tile level or scene level, depending on the tools and functions used. Raster analytics directs the processing results either to the ArcGIS Data Store on the hosting server for feature data products, or to the raster data store for imagery and raster data products. The raster data store can be implemented using distributed file share storage or cloud storage such as Amazon S3 or Microsoft Azure Blob storage.

The image hosting server hosts all the image services generated by the raster analysis server. It includes the raster data store configured with Image Server Manager, which manages distributed file share storage and cloud storage of image services using Amazon S3 or Microsoft Azure Blob storage. The image hosting server stores and returns results requested by members of your enterprise. System configuration apps assign the roles of the servers and data stores, and also set the permission structure for all users across your enterprise. This provides optimal flexibility in configuring and implementing your raster analytics system for specific projects. Multiple servers can be scaled up for raster analytics processing and storage as required. See the tutorial to set up a base ArcGIS Enterprise deployment.

More Information

To learn more about raster analytics using ArcGIS Enterprise and ArcGIS Image Server, check out this video. Explore these help topics to get started with raster analytics:
- Propel Productivity to the Next Level with Raster Analytics
- Raster analysis on Portal for ArcGIS
- Configure the portal to perform raster analysis

To see how raster analytics is being used, check out the Chesapeake Conservancy and Distributed Image Processing presentation, or attend the Plenary session at the 2017 Esri User Conference in San Diego to hear about Chesapeake Conservancy's experience processing and sharing the entire Chesapeake watershed using raster analytics. Please also plan to attend a few presentations addressing raster analytics at the 2017 Esri User Conference:
Posted 07-10-2017, 03:46 PM

BLOG
The June 2017 update of ArcGIS Online includes some useful capabilities for displaying imagery served by your image services. These capabilities give you greater control for visualizing the information contained in your image services. When we talk about rendering, we're not talking about making soap out of fat; here at Esri, rendering is the process of displaying your data. How an image service is rendered depends on what type of data it contains and what you want to show. Once you search for and add a layer, and your image is displayed in Map Viewer, click the More Options icon, then Display, to open the Image Display pane. You will see a new category named Image Enhancement. This is where the real fun begins. The Symbology Type options include Unique Values, Stretch, and Classify. The Unique Values and Classify renderers work with single-band image services, while the Stretch renderer works on both single- and multiple-band images.

Unique Values Renderer

Unique Values symbology represents each value in the raster layer individually and is supported on single-band layers with a raster attribute table. The symbology can be based on one or more attribute fields in the dataset. The colors are read from the raster attribute table; if they are not available, the renderer assigns a color to each value in your dataset. This symbology type is often used with single-band thematic data, such as land cover, because of its limited number of categories. It can also be used with continuous data if you choose a color ramp that is a gradient. Use the Field drop-down to select the field you want to map; the field is displayed in the table. Click the Color Ramp drop-down and click a color scheme. If your image service already has a color ramp, such as the NLCD service in this example, it is displayed by default. The colors in the Symbol column and the labels can be edited as required.
Click Apply to display the rendering in the layer.

Stretch

The stretch parameters improve the appearance of your image by adjusting the image histogram, controlling brightness and contrast enhancements. Either single- or multiple-band images can be stretched. For multiple-band images, the stretch is applied to the band combination previously chosen in the RGB Composite options. The stretch options enhance various ground features in your imagery to optimize information content.

1. Click the Stretch Type drop-down arrow and choose the stretch type to use. The following contrast enhancements determine the range of values that are displayed:
- None - No additional image enhancement will be performed.
- Minimum and Maximum - Displays the entire range of values in your image. Additional changes can be made by editing the values in the Min-Max grid (available only when Dynamic range adjustment is turned off).
- Standard Deviation - Displays values between a specified number of standard deviations.
- Percent Clip - Sets a range of values to display. Use the two text boxes to edit the top and bottom percentages.
2. If the stretch type is set to an option other than None, the following additional image enhancement options will be available:
- Dynamic range adjustment - Performs the selected stretch, but limits the range of values to what is currently in the display window. This option is always turned on if the imagery layer does not have global statistics.
- Gamma - Stretches the middle values in an image but keeps the extreme high and low values constant.
3. For single-band layers, you can optionally choose a new color scheme from the Color Ramp drop-down menu after applying a stretch method to the layer.
4. Click Apply to display the rendering in the layer.
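For intuition about what the Percent Clip option does to the histogram, it can be sketched as follows; this is an illustrative implementation, not Esri's:

```python
import numpy as np

# Illustrative percent-clip stretch (not Esri's implementation):
# drop the top/bottom percentages of the histogram, then rescale
# the remaining values to the full 0-255 display range.
def percent_clip_stretch(band, low_pct=2.0, high_pct=2.0):
    lo = np.percentile(band, low_pct)
    hi = np.percentile(band, 100.0 - high_pct)
    clipped = np.clip(band, lo, hi)
    return ((clipped - lo) / (hi - lo) * 255).astype(np.uint8)

band = np.linspace(0, 1000, 101)         # synthetic single band
stretched = percent_clip_stretch(band)
print(stretched.min(), stretched.max())  # 0 255
```

Clipping the extreme tails before rescaling is why a percent-clip stretch brightens images whose interesting values occupy only a narrow slice of the histogram.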
Here's a WorldView-2 natural color image of Charlotte, NC, using the default (no stretch). And here is the same imagery layer with the top 2% and bottom 20% of the histogram omitted.

Classify Renderer

Classify symbology is supported on single-band layers. It allows you to group pixels together into a specified number of classes. The following settings are available with Classify symbology:
- Field - Represents the values of the data.
- Method - Refers to how the break points are calculated:
  - Defined Interval - You specify an interval to divide the range of pixel values, and the number of classes is calculated automatically.
  - Equal Interval - The range of pixel values is divided into equally sized classes, where you specify the number of classes.
  - Natural Breaks - The class breaks are determined statistically by finding adjacent feature pairs between which there is a relatively large difference in data value.
  - Quantile - Each class contains an equal number of pixels.
- Classes - Sets the number of groups.
- Color Ramp - Allows you to choose the color ramp for displaying the data.

Classify symbology works with single-band layers that have either a raster attribute table or histogram values. If a histogram is absent, it is generated when you select the symbology type. Here's the classified map of Charlotte, specifying 15 classes and using the Natural Breaks method for determining class breaks.

Summary

These new Map Viewer image rendering capabilities are similar to what you are used to in ArcMap and ArcGIS Pro. As of this release, Scene Viewer also supports imagery layers; however, we are still working on bringing the new Map Viewer image rendering capabilities into Scene Viewer. Check out these new imagery capabilities in ArcGIS Online and see how they can enhance the stories behind your data. Please leave comments below about any future enhancements you'd like to see.
And check back in a few months; we have a lot of other cool stuff planned for imagery in upcoming releases.
Posted 07-10-2017, 03:26 PM

BLOG
Imagery can add valuable information and context to a wide array of GIS projects. For example, you can detect impervious surfaces for storm water management, map and manage riparian corridors, or track what's changing in your county. Sometimes, though, incorporating imagery into your GIS can feel overwhelming - how can your system handle that much data? Enter raster analytics, a distributed processing, storage, and sharing system designed to quickly process large collections of aerial, drone, or satellite imagery, then extract and share meaningful information for critical decision support. Raster analytics can be run locally, but you can also pair it with distributed cloud computing to maximize efficiency. Image processing and analysis jobs that used to take days or weeks can be completed in minutes or hours, bringing imagery projects that were impossibly large or daunting within reach. Raster analytics leverages ArcGIS Enterprise, expanded with ArcGIS Image Server configured for distributed raster analysis, to integrate the components of the raster analytics system and support scalable, real-world workflows.

What can raster analytics do?

By leveraging ArcGIS Enterprise with ArcGIS Image Server, raster analytics enables you to:
- Quickly process massive imagery or raster datasets in a scalable environment
- Execute advanced, customized raster analysis
- Share results with individuals, departments, and organizations within or outside your enterprise

The scalable environment of raster analytics empowers you to perform computationally intensive image processing that would otherwise be out of reach or cost-prohibitive. When implemented on-site, raster analytics uses distributed processing to improve efficiency. You can also maximize efficiency by exploiting cloud platforms such as Amazon Web Services or Microsoft Azure, which allow you to dynamically increase or reduce your capacity based on the size and urgency of your projects. Either implementation can save you time, money, and resources.

Raster analytics uses all the advanced image processing and analysis capabilities of ArcGIS Pro to maximum advantage. Built-in raster functions cover preprocessing, orthorectification and mosaicking, remote sensing analysis, and an extensive range of math and trigonometry operators, while your custom functions can extend the platform's analytical capabilities even further. Raster analytics is also designed to streamline collaboration and sharing: users across your enterprise can contribute data, processing models, and expertise to your imagery project, then share results with individuals, departments, and organizations in your enterprise. Finally, raster analytics integrates your image processing and analysis with the world's leading GIS platform, and allows users to seamlessly draw on the Living Atlas of the World, the world's largest collection of online digital maps and imagery.

How is raster analytics used today?

The Chesapeake Conservancy, working with the University of Vermont and WorldView Solutions, was tasked by the Chesapeake Bay Program to produce one-meter-resolution land cover maps covering 100,000 square miles of the Chesapeake Bay watershed. These high-resolution land cover maps, which classify natural and man-made landscape features, are crucial for supporting watershed and storm water management and conservation, and for reducing pollution flowing into the bay. To produce this essential dataset, the Chesapeake Conservancy needed to process over 20 terabytes of raster data and categorize it into twelve land cover types. This project took a daunting 18 months to complete using their local machine resources. As a result, the Chesapeake Conservancy is now working with raster analytics in the cloud to make this timeline more efficient and cost-effective going forward.

As a proof of concept, they used raster analytics to produce a persistent one-meter land cover dataset of Kent County, Delaware (798 square miles). The Kent County project - comprising more than 30 GB and 3.8 billion pixels of raster data - ran on a ten-machine cluster, each machine with twenty cores, and completed in less than 5 minutes. The same job took days to process on their local machines. The Chesapeake Conservancy is now engaged in reprocessing the entire Chesapeake watershed to benchmark the time and cost savings of using raster analytics for the project. Using raster analytics for future projects will mean that the Chesapeake Conservancy can accomplish ambitious projects in a timely and cost-effective manner, without having to spend resources to acquire, configure, and maintain a large computing and storage infrastructure. See the Chesapeake Conservancy and Distributed Image Processing presentation for more details, or check out the Plenary session at the 2017 Esri User Conference in San Diego to hear about Chesapeake Conservancy's experience processing and sharing the entire Chesapeake watershed using raster analytics.

More Information:

To learn more about raster analytics using ArcGIS Enterprise and ArcGIS Image Server, check out this video. Explore these help topics to get started with raster analytics:
- Get Started with Raster Analytics
- Raster analysis on Portal for ArcGIS
- Configure the portal to perform raster analysis

Please plan to attend a couple of presentations addressing raster analytics at the 2017 Esri User Conference:
Posted 07-10-2017, 02:36 PM

BLOG
Hi Frank, If you have the provisioning file, you have the software and the license. Please call Esri Technical Support for help authorizing the FMV Add-In. Jeff
Posted 12-08-2016, 03:13 PM

BLOG
Frank, To obtain the FMV Add-In for your version of ArcGIS Desktop, visit the FMV Product Page at www.esri.com/FMV. Go to the bottom of the page for instructions. Jeff
Posted 12-08-2016, 01:25 PM

POST
Trevor, Many Thanks for the heads up. Can you provide a link to the download? Jeff
Posted 12-08-2016, 09:21 AM

POST
Hi Mark, Thanks for your video and associated metadata, which were collected from a ground-based vehicle. It looks like the sensor was mounted on a stationary mount on the vehicle (not on a moving gimbal) with a fixed zoom setting. I was able to create 2 videos with embedded metadata as described below, but would like to set some context first. To create a MISB-compliant video, the FMV Video Multiplexer needs a minimum of 11 essential parameters to compute the map transform between the video frame and the map. This enables the display of the video footprint on the map in ArcMap, marking features in the video and map displays, simultaneous display of GIS data on the map and in the video player, searching video archives, and more. The minimum metadata parameters are: UNIX Time Stamp, Platform Heading, Platform Pitch, Platform Roll, Sensor Latitude, Sensor Longitude, Sensor Altitude, Horizontal FOV, Sensor Relative Azimuth, Sensor Relative Elevation, and Sensor Relative Roll. See the example CSV rows below:

1433430000000000,320,0,0,39.919767,-105.117297,1690,65,0,-25,0
1433430001000000,320,0,0,39.919767,-105.117297,1690,65,0,-25,0

The Contour metadata contained only 5 of the 11 essential parameters: time stamp, latitude, longitude, heading, and altitude. When multiplexed, these result in the sensor ground track being displayed on the map in ArcMap, with the sensor icon pointing in the correct direction based on the heading info. See the video and metadata CSV I posted on your data download site. However, these few parameters alone do not support the computation of the video-to-map transform. From the video, it looked like the sensor was pointing 90 degrees from the vehicle and slightly down. Based on this, I assigned default values for the other parameters: pitch = 0 degrees, roll = 0 degrees, horizontal FOV = 120 degrees (the actual FOV would come from your camera specs), sensor relative azimuth = 90 degrees, sensor relative elevation = -5 degrees, and sensor relative roll = 0 degrees.
See the metadata CSV file on the data download site. Input both the video and the updated metadata CSV file into the Video Multiplexer GP tool, expand the Calculate Corner Coordinates options, click the checkbox, input the lowest elevation value in your metadata (-4 meters in your case), and hit OK. This results in a MISB-compliant video that enables marking and the other functionality mentioned above. When creating your metadata CSV with the required 11 parameters, use the MISB tag names in the CSV supplied. Alternatively, the field heading names can be whatever you want, mapped to the MISB tag names in the attachment. For more details about multiplexing metadata and video data, please refer to the FMV Users' Manual at https://community.esri.com/docs/DOC-8607. Good luck with your work! Jeff
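As a sketch, a metadata CSV with the 11 fields described above could be generated programmatically. The field names follow the list in this post, and the default orientation values (FOV 120, relative azimuth 90, relative elevation -5) mirror the ones assigned here; substitute your own sensor's values.

```python
import csv
import io

# The 11 minimum metadata fields named in this post, in CSV column order.
FIELDS = ["Unix Time Stamp", "Platform Heading", "Platform Pitch",
          "Platform Roll", "Sensor Latitude", "Sensor Longitude",
          "Sensor Altitude", "Horizontal FOV", "Sensor Relative Azimuth",
          "Sensor Relative Elevation", "Sensor Relative Roll"]

def make_row(ts_us, heading, lat, lon, alt,
             pitch=0, roll=0, hfov=120, rel_az=90, rel_el=-5, rel_roll=0):
    """One metadata record; ts_us is a 16-digit microsecond timestamp.
    The default orientation values are the assumptions made in this post."""
    return [ts_us, heading, pitch, roll, lat, lon, alt,
            hfov, rel_az, rel_el, rel_roll]

out = io.StringIO()
writer = csv.writer(out)
writer.writerow(FIELDS)
writer.writerow(make_row(1433430000000000, 320, 39.919767, -105.117297, 1690))
writer.writerow(make_row(1433430001000000, 320, 39.919767, -105.117297, 1690))
print(out.getvalue())
```

Writing the file this way sidesteps the Excel formatting pitfalls mentioned elsewhere in this thread, since the timestamps never pass through a spreadsheet.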
Posted 12-02-2016, 01:37 PM