<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic arc.write dataframe chunking? in R-ArcGIS Questions</title>
    <link>https://community.esri.com/t5/r-arcgis-questions/arc-write-dataframe-chunking/m-p/804076#M145</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I have a very large data.frame in R (over 13 million rows with 16 variables). Attempts at using &lt;EM&gt;arc.write&lt;/EM&gt; to output the data.frame as a new table in a geodatabase have repeatedly failed, which I assume is due to memory constraints. How can &lt;EM&gt;arc.write&lt;/EM&gt; be used to write this large table to the geodatabase in chunks so as not to exceed memory limits?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Mon, 22 Oct 2018 23:29:33 GMT</pubDate>
    <dc:creator>WyattArnold</dc:creator>
    <dc:date>2018-10-22T23:29:33Z</dc:date>
    <item>
      <title>arc.write dataframe chunking?</title>
      <link>https://community.esri.com/t5/r-arcgis-questions/arc-write-dataframe-chunking/m-p/804076#M145</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I have a very large data.frame in R (over 13 million rows with 16 variables). Attempts at using &lt;EM&gt;arc.write&lt;/EM&gt; to output the data.frame as a new table in a geodatabase have repeatedly failed, which I assume is due to memory constraints. How can &lt;EM&gt;arc.write&lt;/EM&gt; be used to write this large table to the geodatabase in chunks so as not to exceed memory limits?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 22 Oct 2018 23:29:33 GMT</pubDate>
      <guid>https://community.esri.com/t5/r-arcgis-questions/arc-write-dataframe-chunking/m-p/804076#M145</guid>
      <dc:creator>WyattArnold</dc:creator>
      <dc:date>2018-10-22T23:29:33Z</dc:date>
    </item>
    <item>
      <title>Re: arc.write dataframe chunking?</title>
      <link>https://community.esri.com/t5/r-arcgis-questions/arc-write-dataframe-chunking/m-p/804077#M146</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hello Wyatt,&lt;/P&gt;&lt;P&gt;You can use the pattern below to read and write chunks of a large raster. In this scenario you open a large raster (call it raster_file), process it, and write it in chunks to a new raster (call it r_new).&lt;/P&gt;&lt;PRE&gt;# Open the large source raster
r_read &amp;lt;- arc.raster(arc.open(raster_file))

# Define a blank raster to write, with the same extent and
# discretization as the original stored in object r_read
r2 &amp;lt;- arc.raster(NULL, path = tempfile("r_new", fileext = ".img"),
                 dim = dim(r_read), pixel_type = r_read$pixel_type,
                 nodata = r_read$nodata, extent = r_read$extent,
                 sr = r_read$sr)

# Loop through the rows of the large raster
# (assuming you can hold one row in memory at a time)
for (i in 1L:r_read$nrow)
{
  v &amp;lt;- r_read$pixel_block(ul_y = i - 1L, nrow = 1L)
  r2$write_pixel_block(v * 1.5, ul_y = i - 1L, nrow = 1L, ncol = r_read$ncol)
}

# Write the predesignated raster file to disk
r2$commit()&lt;/PRE&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 23 Oct 2018 00:04:21 GMT</pubDate>
      <guid>https://community.esri.com/t5/r-arcgis-questions/arc-write-dataframe-chunking/m-p/804077#M146</guid>
      <dc:creator>Anonymous User</dc:creator>
      <dc:date>2018-10-23T00:04:21Z</dc:date>
    </item>
  </channel>
</rss>

