Yes, new environments will now go to the user's AppData/Esri/conda directory, which should let users interact with them without permission issues.
Posted 06-27-2018 12:01 PM

You should now be able to clone environments in 2.2 via the Manage Environments dialog by selecting an environment and clicking the clone button on the right-hand side.
Posted 06-27-2018 11:59 AM

Ah yes, I misunderstood what you meant. The notebooks on the sandbox site can be downloaded locally; I figured you had done so and were running them on your machine. If you do this, then you should be able to import the CSV using that method. If you're using the online notebooks, you'd need to upload the CSV to an accessible online location and provide the URL instead of a file path.
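A minimal sketch of the difference, using only the standard library (the file name and URL here are hypothetical; readers like pandas.read_csv also accept an https:// URL directly in place of a local path):

```python
import csv
import io

# Hypothetical CSV content. In a local notebook this would come from
# open("data.csv"); in an online notebook you would fetch it instead,
# e.g. urllib.request.urlopen("https://example.com/data.csv").
raw = "city,population\nRedlands,71000\nPalm Springs,48000\n"

rows = list(csv.reader(io.StringIO(raw)))
header, data = rows[0], rows[1:]
print(header)   # ['city', 'population']
print(data[0])  # ['Redlands', '71000']
```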
Posted 02-09-2018 11:24 AM

Assuming the file path is definitely correct: I have seen cases where copy/pasting the file path straight from Windows Explorer carries over an 'invisible' Unicode character, which can cause this issue. Please see this Stack Overflow post for more info: python - pandas.read_csv file not found despite correct path with raw text - Stack Overflow
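One way to check for and strip such characters is to drop anything in Unicode category "Cf" (format characters, which render invisibly); the helper name and example path below are illustrative:

```python
import unicodedata

def clean_path(path):
    """Remove invisible 'format' characters (Unicode category Cf),
    such as U+202A LEFT-TO-RIGHT EMBEDDING, which copy/paste from
    Windows Explorer can prepend to a path string."""
    return "".join(ch for ch in path if unicodedata.category(ch) != "Cf")

pasted = "\u202aC:\\data\\table.csv"   # looks identical when printed
print(pasted == "C:\\data\\table.csv")             # False - hidden char
print(clean_path(pasted) == "C:\\data\\table.csv") # True
```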
Posted 02-08-2018 02:43 PM

Thanks. It seems like there's some issue with the extension under rare conditions; I'll investigate some more.
Posted 10-04-2017 03:28 PM

For a bit of context: the reason this happens is that the data access (.da) cursors are implemented as iterators, which by design allow only one 'pass' and then raise StopIteration on subsequent calls. If you want to access the collection repeatedly, or access items within it by index, you can cast the iterator to a list. The .reset() method also works for multiple passes, as Joshua has suggested, but does not allow indexing. The list approach is only viable if the data set you are working with is smaller than the available RAM, so for larger data sets the iterator is necessary to feed the data 'piece by piece' into the script. Depending on the circumstances, each approach has strengths and weaknesses.
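The one-pass behavior can be demonstrated with plain Python; here a generator stands in for an arcpy.da cursor (the fake data is illustrative):

```python
# A generator standing in for an arcpy.da.SearchCursor.
def fake_cursor():
    for row in [(1, "a"), (2, "b"), (3, "c")]:
        yield row

cur = fake_cursor()
first_pass = [row for row in cur]    # consumes the iterator
second_pass = [row for row in cur]   # already exhausted -> empty
print(first_pass)   # [(1, 'a'), (2, 'b'), (3, 'c')]
print(second_pass)  # []

# Casting to a list up front allows repeated passes and indexing,
# at the cost of holding every row in memory at once.
rows = list(fake_cursor())
print(rows[1])      # (2, 'b')
```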
Posted 09-26-2017 05:01 PM

In general, it's much easier to test scripts when they have been heavily modularized. The .py file that the toolbox points to (or the .pyt file, for Python toolboxes) can be very 'thin', handling only the import of arcpy and the parameter I/O, with the rest of the logic split across several imported modules. This makes those modules easier to test, since they can be separated into those which require arcpy and those that don't. For example, rather than processing each row of a feature class in a loop inside the main script, break that logic out into an arcpy-bound function which takes a path to a table and a list of fields, then generates rows from the table via an arcpy.da cursor. Because it can be called independently of the main script, this function can easily be mocked to return tuples. It also lets you debug the code in an IDE, since at that point it runs completely independently of the ArcGIS UI.
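A minimal sketch of that split; the function names (generate_rows, total_area) and the field layout are hypothetical, and the arcpy call is shown only in a comment so the pure logic stays testable without arcpy:

```python
def generate_rows(table_path, fields):
    """arcpy-bound seam. In the real module this would wrap a cursor:
        with arcpy.da.SearchCursor(table_path, fields) as cur:
            yield from cur
    """
    raise NotImplementedError("requires arcpy")

def total_area(rows):
    """Pure logic: sums the area field from (oid, area) tuples."""
    return sum(area for _oid, area in rows)

# In a test, 'mock' generate_rows by passing plain tuples instead:
fake_rows = [(1, 10.0), (2, 2.5), (3, 7.5)]
print(total_area(fake_rows))  # 20.0
```

The only code that can't run outside ArcGIS is the thin generator; everything downstream of it accepts any iterable of tuples.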
Posted 09-26-2017 02:58 PM

Thanks, that's a great tutorial! I hadn't seen it before.
Posted 09-26-2017 02:44 PM

Yeah, that's fair, yours would be quicker; Counter certainly isn't the fastest data structure available, since it's implemented in Python rather than native C (it is substantially improved in Python 3, at least). Still, it keeps the function readable and is fine to use until optimization is required. Different tools for different problems, I guess.
Posted 09-26-2017 12:41 PM

Nice! You can also use the collections.Counter dictionary subclass instead of doing it manually.
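For instance (the sample values here are made up):

```python
from collections import Counter

values = ["road", "rail", "road", "ferry", "road"]
counts = Counter(values)        # replaces the manual dict bookkeeping
print(counts["road"])           # 3
print(counts.most_common(1))    # [('road', 3)]
print(counts["missing"])        # 0 - missing keys default to zero
```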
Posted 09-26-2017 12:11 PM

Hello Tobias, once you have added the new channel you still have to refresh conda's metadata; this is done automatically before conda runs most commands on the command line, or by clicking the blue circular arrow button in the UI. We ran into some complications with the metadata JSON files due to UAC when Pro is installed in Program Files, so that experience will be improved going forward. The conda executable that Pro uses is in the %ProInstall%/bin/Python/Scripts directory, so you can work with conda via the command line if it's on your PATH. Alternatively, there is a batch file in the Start menu, in the ArcGIS program group under 'Python Command Prompt', which opens to the environment's location and activates it for you. Hope that helps get you in the right direction!
Posted 09-25-2017 09:54 AM

With the way the installer currently works, if you modify files within the Pro folder (i.e. by updating them with conda), it won't remove those files when uninstalling, because the file hash has changed. So you're likely seeing two versions of pandas because the reinstall laid down an older version beside the updated one that wasn't deleted. It's an annoying problem and we're finding ways around it for upcoming versions. For now you may have to manually delete the %pro%/bin/Python/envs folder when uninstalling to ensure everything is cleaned up, or manually remove that pandas version from the environment's lib/site-packages directory.
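As a quick way to spot this kind of duplication, here is a hypothetical helper (the name duplicate_dists and the simple version regex are assumptions, and won't match every version scheme) that scans a site-packages folder for packages with more than one dist-info/egg-info entry:

```python
import re
from collections import defaultdict
from pathlib import Path

def duplicate_dists(site_packages):
    """Group *.dist-info / *.egg-info folder names by package and
    report any package present with more than one version."""
    versions = defaultdict(set)
    for entry in Path(site_packages).iterdir():
        m = re.match(r"(?P<name>.+?)-(?P<ver>[\d.]+)\.(dist|egg)-info$",
                     entry.name)
        if m:
            versions[m.group("name").lower()].add(m.group("ver"))
    return {name: sorted(vers)
            for name, vers in versions.items() if len(vers) > 1}
```

Point it at the environment's lib/site-packages directory; anything it returns is a candidate for manual cleanup.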
Posted 09-22-2017 04:52 PM

I'd say this might be easiest to accomplish in two passes over your feature class using data access cursors, at least to keep it understandable while you're learning Python. Python's dictionary type is a great way to deal with tabular data: the first pass over your table can use an arcpy.da.SearchCursor to build a dictionary where the keys are your rpsuid numbers and each value is a list of the ObjectIDs of the features sharing that rpsuid. Then, on a second pass with an arcpy.da.UpdateCursor, you can look up each rpsuid in the dictionary to get the list of ObjectIDs, find the ObjectID's position in that list, and append it to that feature's facility to produce the final value for the idpk field. I'll leave it as a learning exercise, but feel free to ask if you want some help with the code.
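A plain-Python sketch of the two passes; tuples stand in for cursor rows here, and the "rpsuid-position" idpk format is an assumption for illustration:

```python
from collections import defaultdict

# Fake (rpsuid, oid) rows standing in for cursor output.
rows = [(100, 1), (100, 2), (200, 3), (100, 4), (200, 5)]

# Pass 1 (SearchCursor in the real script): map rpsuid -> list of OIDs.
oids_by_rpsuid = defaultdict(list)
for rpsuid, oid in rows:
    oids_by_rpsuid[rpsuid].append(oid)

# Pass 2 (UpdateCursor in the real script): the position of each OID
# within its rpsuid's list supplies the suffix for the idpk value.
for rpsuid, oid in rows:
    position = oids_by_rpsuid[rpsuid].index(oid) + 1
    idpk = f"{rpsuid}-{position}"
    print(idpk)  # e.g. 100-1, 100-2, 200-1, ...
```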
Posted 09-22-2017 04:47 PM

This functionality will be available in Pro 2.1 as arcpy.mp.ConvertWebMapToArcGISProject; it was one of the more complicated pieces to port between versions due to the addition of Projects in Pro.
Posted 09-22-2017 04:28 PM

Jupyter is awesome and I enjoy using it, but I wouldn't call it an IDE. An IDE will typically include things like build automation tools, utilities to interact with the underlying file system, built-in debugging, and so on. Jupyter is an excellent tool for rapid data exploration, for creating presentations and documents with embedded code, and for creating websites that display real-time data. When building a full Python application, an IDE such as PyCharm or Visual Studio will be more suitable for your needs. Some Python IDEs (such as PyCharm) include an .ipynb editor, which allows you to view notebooks as well.
Posted 09-15-2017 11:28 AM