POST
|
OK, understood. Well, you can set up AGS in HA easily with hundreds of VMs processing data; however, I doubt this would be the best option available. I will use Python tools or other specific software. M
04-30-2015
02:52 PM
|
0
|
1
|
1104
|
POST
|
Hi Sergio/Hola, I think there is a bit of confusion here. gis1 and gis2 are your AGS servers, right? And you have installed the Web Adaptor on these same servers, so for example you installed Web Adaptor wa1 on server gis1? Is that correct? It needs to be like that anyway. You need to open port 6080 on server gis1 only if you intend to access the REST endpoint via the "normal" AGS REST URL, something like http://gis1:6080/arcgis/rest/services. If you don't intend to use port 6080, it doesn't need to be open; you can use the Web Adaptor URL, which uses port 80: http://gis1/wa/rest/services (wa is the Web Adaptor name you gave when you installed the software). In any case you will need a cluster (network alias) for high availability to work. For example, you can create a Microsoft Network Load Balancer and create an alias called agscluster that points primarily to server gis1 and, if the health checks fail, automatically points to the gis2 server. The AGS address to use in ArcGIS Desktop and in web applications will then be something like http://agscluster/wa/rest/services. Bear in mind you will need a second AGS license for HA in hot standby (second server always on and sharing load). You can use the same configuration just for cold failover using only one license, as long as the second server is always off and not running any map services. Have fun! Miguel
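The two URL shapes described above (direct port 6080 versus the Web Adaptor on port 80) can be sketched with a small helper. This is just an illustrative Python function; `gis1`, `wa` and `agscluster` are the example names from this post, not real servers:

```python
def ags_rest_url(host, web_adaptor=None, port=6080):
    """Build an ArcGIS Server REST endpoint URL.

    With a Web Adaptor name, use the port-80 front-end URL;
    otherwise fall back to the direct AGS URL on the given port.
    """
    if web_adaptor:
        return f"http://{host}/{web_adaptor}/rest/services"
    return f"http://{host}:{port}/arcgis/rest/services"

# Direct access to one server (requires port 6080 open):
direct = ags_rest_url("gis1")
# HA access through the load-balancer alias and Web Adaptor:
ha = ags_rest_url("agscluster", web_adaptor="wa")
```

Clients (ArcGIS Desktop, web apps) should only ever be given the cluster-alias form, so a node failure is invisible to them.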
04-24-2015
01:18 AM
|
0
|
0
|
317
|
POST
|
Hi Kenneth, I don't fully understand your question, but I will try to bring some clarification. If you just need to configure an SDE instance, then use local disks. If you want to configure SDE for failover or high availability, then configure a SQL cluster where the database is stored in a common place on a SAN (not in the same rack as the sql1 or sql2 servers). The advantage of a failover configuration is that if the sql1 server goes down, the sql2 server will automatically take over and users will not be disrupted. Of course you need to configure your NLB and have a primary and a secondary (active/passive) SQL server. Since both use the same database, all data is up to date and synchronized (not really synchronized; the two instances simply use the same database). Clear, or clear as mud? M
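One practical consequence of the setup above: clients should connect to the cluster's network name, never to sql1 or sql2 directly, so a failover is transparent. A minimal sketch of building such a connection string (the driver name, `sqlcluster` alias and `sde` database are assumptions for illustration; `MultiSubnetFailover` is the SQL Server ODBC keyword that speeds up reconnects to a clustered listener):

```python
def sde_connection_string(cluster_alias, database):
    """Connection string that targets the cluster alias, not a node.

    Assumes the Microsoft ODBC Driver for SQL Server is installed;
    adjust the driver name for your environment.
    """
    return (
        "Driver={ODBC Driver 17 for SQL Server};"
        f"Server={cluster_alias};"
        f"Database={database};"
        "MultiSubnetFailover=Yes;"
        "Trusted_Connection=Yes;"
    )

conn_str = sde_connection_string("sqlcluster", "sde")
```

If sql1 fails, the cluster name simply resolves to sql2 and the same string keeps working.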
04-16-2015
07:27 AM
|
0
|
0
|
772
|
POST
|
Yes, it looks like that is right. I found this, which may help: Adding labels to ArcGIS Online web maps: Part 1 | ArcGIS Blog. Another option is having a map service just for the labels.
04-16-2015
05:02 AM
|
2
|
0
|
943
|
POST
|
Hi Vincent, it will all depend on the data itself, but from my experience it will not be more than 1 GB a month. You can check it by installing an application such as Wireshark or Fiddler to inspect the requests made to your application during typical zoom / pan / identify / print usage. We at Exprodat Consulting developed tools to find out how a map service load affects AGS server performance, but we don't have a free tool. Contact me at mmorgado@exprodat.com if you need more help. Miguel
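Once you have measured typical request sizes in Wireshark or Fiddler, the monthly estimate is simple arithmetic. A sketch, where all the input numbers (50 KB per request, 200 requests per session, 3 sessions a day) are hypothetical values you would replace with your own measurements:

```python
def monthly_bandwidth_mb(avg_request_kb, requests_per_session,
                         sessions_per_day, days=30):
    """Rough monthly bandwidth estimate in MB from measured request sizes."""
    total_kb = avg_request_kb * requests_per_session * sessions_per_day * days
    return total_kb / 1024  # KB -> MB

# Example: 50 KB average response, 200 requests per session, 3 sessions/day
estimate = monthly_bandwidth_mb(50, 200, 3)  # well under 1 GB/month
```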
04-16-2015
03:11 AM
|
1
|
0
|
382
|
POST
|
Hi Johan, have you raised this with ESRI as a potential bug? Can you specify the AGS versions, and whether you still need help with this issue? Thanks, Miguel
04-16-2015
03:07 AM
|
0
|
2
|
943
|
POST
|
Hi Kenneth, we at Exprodat Consulting can help you with this if you need it. I will try to help you as much as I can. Basically, for an AGS cluster with 10.2.1, 10.2.2 or 10.3, it works this way:
1. Install the AGS server software on as many servers as you wish.
2. Create a site on the main server only, then access the AGS Manager and check that the site is working well.
3. Find a SAN/NAS where you will store all your AGS site configuration and data in future (if, for example, you will copy data to the server). Of course, access to this SAN needs to be really quick from all AGS servers (ideally), and it should not be located in the same rack as the AGS servers. Talking about racks: we just use VMs and it works fine. The tricky part is the infrastructure sizing and AGS performance, which needs some technical input to be successful.
4. The second part of the job is to point the config-store and server directories of the initial site to the SAN/NAS location.
5. After checking that all is working, just run Manager on the other AGS installations and add those machines to the main AGS site. Job done!
Of course, you will need an AGS license for each 4 cores. The SDE cluster will depend on the DBMS; we at Exprodat can help you with that as well. You can read some of my LinkedIn posts on AGS high availability here: Cloud - Building High Availability Applications - Week 1 | Luis Miguel Morgado | LinkedIn; Cloud - Building High Availability Applications - Week 2 | Luis Miguel Morgado | LinkedIn; ESRI AGS - Cloud security and final steps | Luis Miguel Morgado | LinkedIn. Contact me at mmorgado@exprodat.com if you have any questions. Miguel
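The "one AGS license for each 4 cores" rule mentioned above is easy to turn into arithmetic when sizing a multi-machine site. A small sketch (the core counts are example inputs; check your license agreement for the actual terms):

```python
import math

def ags_licenses_needed(cores_per_server):
    """Licenses for a multi-machine AGS site, at one license per 4 cores.

    cores_per_server: list of CPU core counts, one entry per AGS machine.
    Each machine is licensed separately, rounding up to whole licenses.
    """
    return sum(math.ceil(cores / 4) for cores in cores_per_server)

# Example: a two-node HA site with 4 cores on each VM
licenses = ags_licenses_needed([4, 4])
```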
04-16-2015
03:01 AM
|
0
|
3
|
772
|
POST
|
Hi Patricia, what kind of data do you want to serve? If what you want is to set up map services to be used by web applications, AGS in high availability with a load balancer will be the solution. If what you want to run is geoprocessing tools, then you probably need to look at it in a different way. Can you provide more details?
04-16-2015
02:39 AM
|
0
|
3
|
1104
|
DOC
|
Hello, Our GIS workgroup is testing a new spatial database which utilises SDE 10.1. We have found issues with the performance of redisplay of geospatial data on the test database thus far. We have a defaults install. Our aim is for at least similar display performance with the new database. Unfortunately, we have found redisplay times for spatial datasets are up to 2-3 times slower under the new database setup when compared to our 'old' (SDE 9.3.1) setup.

Environment:
* ArcGIS 10.1 (build 3035) on Microsoft Win XP v2002 SP3, 3.3 GHz with 2 GB RAM.
* Oracle 11gR2 on Sun Solaris (geodatabase-enabled SDE schema v10.1).
* Oracle 11gR2 on Sun Solaris with SDE 9.3.1. (Note: this 'old' environment forms the 'baseline' for performance testing against the new system/environment.)

The performance tests:
* A bookmarked extent sourced from a PC (specs above) with ArcMap 10.1 installed.
* Polygon datatypes, ~400,000 spatial records:
  * ST_Geometry (10.1)
  * Low Resolution ESRI binary (9.3.1)

Find below a copy of the two queries (captured using OEM) as received at the database.

Query on the Oracle-SDE 10.1 database:

SELECT 1 SHAPE, VEGGROUP, TEST_TV2_ST2.OBJECTID, TEST_TV2_ST2.SHAPE.points,TEST_TV2_ST2.SHAPE.numpts,TEST_TV2_ST2.SHAPE.entity,TEST_TV2_ST2.SHAPE.minx,TEST_TV2_ST2.SHAPE.miny,TEST_TV2_ST2.SHAPE.maxx,TEST_TV2_ST2.SHAPE.maxy,TEST_TV2_ST2.rowid FROM BASE.TEST_TV2_ST2 TEST_TV2_ST2 WHERE SDE.ST_EnvIntersects(TEST_TV2_ST2.SHAPE,:1,:2,:3,:4) = 1

Query on the Oracle-SDE 9.3.1 database:

SELECT /*+ LEADING INDEX(S_ S945_IX1) INDEX(SHAPE F945_UK1) INDEX(TASVEG_VEGETATION_ONLY A945_IX1) */ SHAPE, VEGGROUP ,S_.eminx,S_.eminy,S_.emaxx,S_.emaxy ,SHAPE.fid,SHAPE.numofpts,SHAPE.entity,SHAPE.points,SHAPE.rowid FROM (SELECT /*+ INDEX(SP_ S945_IX1) */ DISTINCT sp_fid, eminx, eminy, emaxx, emaxy FROM SIPS_DBA.S945 SP_ WHERE ((SP_.gx >= :1 AND SP_.gx <= :2 AND SP_.gy >= :3 AND SP_.gy <= :4 ) OR (SP_.gx >= :5 AND SP_.gx <= :6 AND SP_.gy >= :7 AND SP_.gy <= :8)) AND SP_.eminx <= :9 AND SP_.eminy <= :10 AND SP_.emaxx >= :11 AND SP_.emaxy >= :12) S_ , SIPS_DBA.TASVEG_VEGETATION_ONLY , SIPS_DBA.F945 SHAPE WHERE S_.sp_fid = SHAPE.fid AND S_.sp_fid = SIPS_DBA.TASVEG_VEGETATION_ONLY.SHAPE

Findings:

Query on SDE 10.1:
* takes 70 seconds to redisplay in ArcMap 10.1.
* analysing the explain plan of the 10.1 SDE query shows it has a very high cost.
* took 58 seconds to process on the server.
* utilises the layer geometry to perform the sub-selection.

Query on SDE 9.3.1:
* takes 30 seconds to redisplay in ArcMap 10.1.
* analysing the explain plan of the 9.3.1 SDE query shows it has a low cost.
* takes 4 seconds to process on the server.
* utilises Oracle optimiser hints.
* uses the layer's spatial index.

We have some questions and concerns:
1. Why does the query hit the database with different syntax, when the client application is the same (ArcGIS 10.1), as is the (bookmarked) extent?
  a. Why are Oracle optimiser hints not being used in SDE 10.1?
  b. Why is the query in SDE 10.1 utilising the feature geometry instead of the spatial index to perform the query? (Note: the spatial index is available and built.)

Looking forward to your insights on this matter. Regards, Simon

Different geometry storage is queried differently. ST_GEOMETRY uses LOB, while SDEGEOMETRY uses LONG RAW (which is basically your 2.5x difference right there). Spatial index preference would be determined by information you haven't provided (the size of the search window, the envelopes of the layers, and the number of features returned by the query). - V

vangelo;301582 Hello, Thanks for your response... it has prompted discussion... and continued testing. You mention SDEGEOMETRY (ESRI binary) uses LONG RAW and therein lies the 2.5x performance difference to ST_GEOMETRY. I have since tested another datatype, 'SDELOB' (also LOB): I found very similar redisplay performance to the (deprecated?) 'ESRI binary' (SDEGEOMETRY) format mentioned above (i.e. LONG RAW = LOB).
I am a little confused as to why such a significant redisplay difference might exist between LOB formats.

vangelo;301582 Re: spatial index preference of ST_GEOMETRY: the search window size is exactly the same for all queries (it's bookmarked). The query window (envelope?) is quite 'large' and returns 100,000 records (out of a possible 400,000). Incidentally, the query at larger scales (i.e. zoomed in closer) DOES use the spatial index, and performance is good. I am somewhat confused as to why SDE/ArcMap 10.1 would NOT use the spatial index to search at smaller (zoomed-out) scales. It makes me wonder whether the use of the spatial index is configurable. Thanks for your input.

s/SDEGEOMETRY/SDEBINARY/g (Doh!) One quarter of the features is probably past the point where a spatial index would be appropriate. The exact ratio may have been changed over time. It's not "configurable", but if you toy with the layer envelope extent, you might see a difference. You can also toy with the ATTRIBUTE_FIRST query flag with scale-dependent layers to force full-table-scan simple shape filtering. I take steps to avoid ever needing to render 100k features from any table in any one query, ever, so this benchmark isn't exactly unbiased. - V

Thanks Vince, our testing is continuing.

vangelo;302344 Re the above: I am still a little confused as to what I would expect to be going on with these spatial queries, as I would have thought the most efficient way to query a large polygon geodatabase is (almost always?) using a spatial index. Perhaps points and lines may not perform as well with a SI (intuitively)? It depends on many factors, I suppose, but it should be the preferred way for polygons, almost always. Possibly this is only my lack of understanding shining through! Incidentally, I reran the same spatial queries from ArcGIS 10.1 on the following datatypes:
* SDO geometry (SDE 10.1) uses its spatial index query - 65 secs to refresh display (~100k records).
* ST_Geometry (SDE 10.1) does not utilise the spatial index query - 90 secs to refresh display (~100k records).
* SDELOB (SDE 10.1) uses its spatial index query - 35 secs to refresh display (~60k records).
* SDE binary (SDE 9.3.1) uses its spatial index query - 36 secs to refresh display (~60k records).

This runs counter to what I would expect: ST_Geometry should be the fastest, should it not? Any ideas on what is going on or what I might have missed? It does make me wonder if this *new* database is fully 'configured'. Simon

supasim1;303798 Why would you expect that? Index efficacy is an established science. There's nothing special about polygons which would suddenly improve the cost/benefit relationship of index I/O to full-table-scan I/O; in fact, I would expect polygons to be *more* expensive than simpler features. You've left out too many details to begin to evaluate why this one query performs in the manner it does. As stated earlier, I try to avoid any query which returns a significant fraction of a large table, so most of my efforts are spent optimizing for small random searches. - V

vangelo;303803 You misunderstand a lot of what I have stated. I am not questioning the virtues of spatial indexing. If you read my posting more closely, I said "I would have thought the most efficient way to query a large polygon geodatabase is (almost always?) using a spatial index? Perhaps points and lines may not perform as well with a SI". I am saying polygons are slower without the SI, i.e. I *agree* with you.

vangelo;303803 Our organisation utilises a lot of large GIS data, and our business requires the results of large queries displayed in GIS applications. Optimising for small window searches is not required (performance there is acceptable); the performance of large areal window searches is where the problem is. Hence this posting.

vangelo;303803 This one query is an example of the systematic poor and unsatisfactory performance we get with the ST_Geometry data type.
This runs contrary to what we have read in these forums and elsewhere! Thanks for your attempts to assist. Has anyone else experienced poor spatial query performance on an SDE 10.1 database for larger queries with ST_Geometry? If so, I would appreciate hearing from you. Regards, Simon

I *am* questioning the virtue of spatial indexes, especially with respect to large result sets. It is not possible for an index to "almost always" be the most efficient way to access data. The principles involved are the same as why an index on age then gender would be more efficient than gender then age, but when the query is "locate males aged 20 to 60" you'll be in for a wait. Any spatial query returning 25% of 400k rows is likely to process over a million index tile hits, most of which are redundant. And once the records are identified, they'll still need to be extracted from the table, whose pages may be cluttered with rows that don't match, or don't match yet. This is where the full impact of spatial fragmentation rears its ugly head -- index query I/O is compounded by multiple reads of the same blocks, which fail to cache. The thing is, there isn't much you can do to change the horrific I/O cost. You can reduce the storage precision, which can lop an order of magnitude off the storage. You can spatially defragment the data. And you can increase the grid size to reduce duplicates (at the cost of false positives). After that, you're down to pinning the table in RAM (or using the standard techniques to avoid large rowsets). - V

Our investigations back up some of what you say, V. We have tested query formulation and performance on the database: 1) The 'ESRI binary' (SDE 9.3.1) datatype performs the query using optimiser hints and the spatial index (SI) - it has a negligible performance cost to the database. 2) The SDELOB (SDE 10.1) query, too, is very efficient/fast in terms of I/O. It formulates queries using Oracle optimiser hints incorporating the SI - very fast! Interesting!
However, 3) the ST_Geometry datatype query uses ST_EnvIntersects in the query, and as a result must presumably do a full table scan to return results. It does *not* use any Oracle optimiser hints in the query formulation and has a *huge* (measured) database cost to run. All these queries are on the same geographic extent. Our tests reveal vast differences in the level of I/O for different data types/versions of SDE. Queries formulated using optimiser hints and the SI are the most efficient *by far*, backing up what Vince mentions in his previous post about query formulation. I think (???) the problem I have may be related to KB 38019. I note this KB article does not seem to apply to SDE 10.1, though it appears similar. Our investigations seem to indicate that ST_Geometry is doing a full table scan to resolve ST_EnvIntersects. The database cost of this query is much, much greater than that of formulating the query using optimiser hints and the SI (for large returns). ESRI - why does ST_Geometry not invoke Oracle optimiser hints in query formulation (as do earlier/other ESRI data types)? Regards, Simon

ST_EnvIntersects normally *does* use an index. The way to ask Esri questions is to start a Tech Support incident. - V

Have raised a support query with ESRI regarding this matter. I will post the response/resolution when found.

supasim1;305169 Please do post the results of your Esri Tech Support inquiry. This is a good topic. Thanks to Vangelo for his information thus far.

Hi, any updates to share, buddy? Thanks

supasim1;305169 This document was generated from the following discussion: Performance on SDE 10.1 vs 9.3.1 (Oracle)
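Vince's point about "over a million index tile hits" for a query returning 25% of 400k rows can be checked with back-of-envelope arithmetic: each feature is indexed once per grid tile its envelope overlaps, so a large window query scans many more index entries than it returns rows. The tiles-per-feature value here (10) is an illustrative assumption, not a measured number:

```python
def index_tile_hits(total_rows, selectivity, tiles_per_feature):
    """Rough count of spatial-index entries a window query must scan.

    Each matching feature appears once per grid tile its envelope
    overlaps, so hits grow with grid granularity, and most are
    redundant duplicates that must still be read and discarded.
    """
    matching_rows = total_rows * selectivity
    return int(matching_rows * tiles_per_feature)

# The case discussed above: ~100k of 400k polygons returned.
hits = index_tile_hits(400_000, 0.25, 10)
```

This is why, past some selectivity threshold, the optimiser is right to prefer a full table scan over the spatial index.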
08-02-2014
05:19 AM
|
0
|
0
|
3156
|
POST
|
So, imagine you are in the future (let's say 2016) and you no longer manage your own AGS, only consuming it as a service (read: ArcGIS Online). Imagine you wish to publish some map services and you want to know how much it will cost you. Here is the answer: ArcGIS Online | Credits Estimator. Visit this site to get an idea of the costs involved in running a web app with AGOL: ESRI Conservation Program: conservation geography, activism and multicultural social change. So, is AGOL the future of web mapping? When does AGS start to be economically viable? Is AGS an alternative to AGOL just for security reasons?
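The Credits Estimator linked above boils down to arithmetic over per-service rates. A sketch of that calculation; the rates below are placeholder assumptions for illustration, not real AGOL pricing, so always take the current figures from the estimator itself:

```python
# Assumed rates, for illustration only -- check the Credits Estimator.
FEATURE_STORAGE_CREDITS_PER_10MB_MONTH = 2.4
TILE_GENERATION_CREDITS_PER_1000_TILES = 1.0

def monthly_credits(feature_storage_mb, tiles_generated):
    """Estimate monthly AGOL credit burn for storage plus tile generation."""
    storage = feature_storage_mb / 10 * FEATURE_STORAGE_CREDITS_PER_10MB_MONTH
    tiles = tiles_generated / 1000 * TILE_GENERATION_CREDITS_PER_1000_TILES
    return storage + tiles

# Example: 100 MB of feature storage and 5,000 cached tiles in a month.
cost = monthly_credits(100, 5000)
```

Multiplying the result by the per-credit price is where the AGOL-versus-own-AGS break-even question gets its answer.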
08-02-2014
02:14 AM
|
0
|
0
|
1309
|
BLOG
|
"Should we adopt the Cloud or not? Does it really reduce IT expenditure? What are its advantages and disadvantages versus in-house IT? Is it the right choice for your business? Many individuals and organizations are still trying to figure it out. So let's review cloud strengths and weaknesses under different scenarios."

Cloud for startups

Two startup companies, A and B, in the same business field, are getting ready to launch their operations and need to set up their IT environments. Each company initially has 5 employees and expects to hire 25 new employees within one year. Both companies will be using 2 main applications: email and accounting software.

Company A decided to purchase its own hardware, and had to perform the following actions:
- Dedicate a room to be used as a server room.
- Contact IT vendor(s) to discuss the setup and design of the IT infrastructure and applications: high availability, hardware needed, number of current users and expected users in 3 to 5 years, and so on.
- Prepare the server room in terms of cooling, electrical setup, fire suppression, etc.
- Finalize negotiations and contract(s) after choosing one or many vendors.
- Wait for hardware delivery and for setup/configuration of the environment to be done.

Company B decided to use cloud applications, and performed the following steps:
- Using a credit card, signed up for cloud email and accounting applications for 5 users and configured both in a few days.

Advantages of company B's approach versus company A's approach:
- Reduced time to market. Instead of spending at least 2 months (if all goes perfectly well), company B needed a few days to set up its IT environment.
- No need to invest ahead. Traditional IT expenditure has been very capital intensive: company A paid a considerable amount of money to purchase hardware, software and other equipment and services. Acquiring capital for large purchases is difficult (especially for smaller organizations).
- Using cutting-edge technology and expert support. Company B is using the cloud vendor's infrastructure, which is usually equipped with the latest technology and highly available, with disaster recovery included or easily enabled for a small extra fee. Company B will be serviced by highly skilled experts.
- Accessing resources when needed and as needed. Suppose that in the first year the 2 companies did not hire new employees: company A has paid for extra hardware capacity which was not needed. Suppose instead that the 2 companies grew unexpectedly in 3 years and reached 200 employees with a large customer base: company A might need to change or upgrade its existing hardware, a time-consuming and costly operation, while company B will just need to purchase licenses for the new users.
- Releasing IT resources to focus on core business applications. Company B's IT department will be able to focus on the strategic aspects of its role and on business applications by minimizing time spent on maintenance. Company A's IT will spend a considerable amount of time maintaining the status quo.

Cloud for established and/or large companies

The story here is different, since there are many factors that need to be taken into consideration and questions which must be answered:
- Do we need to move our working IT infrastructure and/or business applications fully or partially to the cloud?
- Will it reduce IT operational cost?
- Is it going to give us a strategic advantage over the competition?
- Is it the right moment to initiate the move, taking into consideration the time and effort needed, the impact on all departments, IT security, working processes and business workflows (which will ultimately need to be altered), and other crucial ongoing projects which might be disrupted?
- Will systems/applications performance be impacted?
- Is it better to use existing infrastructure in order to deploy new solutions, or to use the cloud?
- In case the solution needs to interact with existing in-house applications, is it going to be harder to fulfill this requirement?

There is no simple or single answer to each of these questions, and each company will need to study its options thoroughly before taking any action.

Other cases where the cloud can be beneficial:
- Unexpected or variable load: elasticity is an important characteristic of the cloud (check the article "What is the Cloud?") which allows supporting variable or unexpected loads. As an example, a company is selling products online; after an interview with the CEO on TV, a considerable number of users access the site. Using the cloud, the site will be able to cope with this unexpected load, allowing for more revenue, whereas if the site were using traditional hardware it might have failed to handle the load, with users finding themselves unable to purchase anything online because the site became very slow or unreachable. Another example is a company similar to Facebook, which has to deal with variable loads on a daily basis.
- Need for resources/applications for a specific period of time: a company which needs to develop a test environment or an application for a specific project can quickly provision the needed environment using the cloud. The environment can be released once the project is closed. Note that the company will only be charged for the time the resources were used.
- Non-complex common applications: some companies just need to send/receive emails or use software/applications as-is with no or minor modifications. These kinds of applications are easier to provision using the cloud.

Cloud pain points
- Confidentiality: it is hard to guarantee confidentiality in general, so how can you trust third-party vendors with part, if not most, of your data, especially after the latest NSA spying scandals? Confidentiality agreements should be properly reviewed.
- Upper management and employees' mentality: change is hard, and cloud supporters might face fierce resistance.
- Internet cost and quality: in some countries the cost of internet access is still too high, making it really expensive to access and use cloud services. Internet quality is sometimes poor (high latency, packet loss, etc.), which also impacts the interaction with some cloud services.
- Legal, compliance and regulatory issues: many articles discuss these issues; check this one by Vic (J.R.) Winkler: http://technet.microsoft.com/en-us/magazine/hh994647.aspx.

While major efforts are being made towards the adoption of the cloud, many barriers still need to be addressed. Businesses should carefully decide whether or not to use the cloud by weighing the advantages and disadvantages. Read more here: Is Cloud Computing The Right Choice For Your Business? | XchangeTech
07-18-2014
04:51 AM
|
1
|
0
|
1045
|
DOC
|
07-17-2014
03:33 AM
|
2
|
0
|
548
|
POST
|
Hi Daniel, are you asking if you can access the map service located on the Amazon EC2 AGS server from your Portal? If that is the question, then yes you can. The external map services will act like any ESRI basemap. However, for that to work you need to allow the URL to pass through your firewall. To use AD authentication on your map services you will need to set up AD on your Amazon server and configure the connection between your company network and this server (e.g. a site-to-site VPN).
07-15-2014
03:14 AM
|
1
|
0
|
270
|
Title | Kudos | Posted
---|---|---
 | 1 | 04-16-2015 03:11 AM
 | 3 | 07-15-2014 02:23 PM
 | 2 | 04-16-2015 05:02 AM
 | 1 | 07-15-2014 03:14 AM
 | 2 | 07-17-2014 03:33 AM
Date Last Visited |
11-11-2020
02:24 AM
|