I have used the same rasterization process many times, and it works fairly well:
raster <- rasterize(vect(shapefile.shp), base_grid, "my_variable")
where raster is the rasterized output, shapefile.shp is the original vector file, base_grid is the raster skeleton and "my_variable" is the variable to be transferred. For variables that are not related to the area of the polygon (for instance population, production, yield, temperature, precipitation), this approach is satisfactory, as it essentially averages the values onto the grid.
However, I am now trying to rasterize polygons whose variable is harvested area. This is not strictly the area of the polygon, but it can be considered proportional to the total polygon area. The approach above produces inflated values for the harvested area (3-4 times the corresponding vector totals), probably because the polygons are generally larger than the grid cells: a large polygon with 100 units that is split over 10 grid cells gives 100 units to each cell, whereas I want each cell to get 10 units (because the variable is an area).
I suppose the approach I have works like this: "In each grid cell, take the weighted average of all polygon values, weighting each polygon by its presence in the grid cell."
But what I am looking for is: "For each polygon part in each grid cell, calculate the fraction of the polygon inside the grid cell (with respect to the total polygon area) and SUM all values inside the grid cell (since the variable is an area)."
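As a toy illustration with made-up numbers (the 100-unit polygon above split evenly over 10 cells), this is the difference between the two:
poly_value <- 100              # harvested area reported for the polygon
frac_in_cell <- rep(1/10, 10)  # fraction of the polygon falling in each cell
# what I effectively get now: every cell inherits the full polygon value
rep(poly_value, 10)            # 100 per cell, 1000 in total (inflated)
# what I want: spread the value according to the polygon fraction in each cell
frac_in_cell * poly_value      # 10 per cell, 100 in total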
Any help is highly appreciated.
UPDATE:
A view of the vector data. There are actually many rasters because I have multiple years:
Simple feature collection with 9382 features and 3 fields
Geometry type: MULTIPOLYGON
Dimension: XYZ
Bounding box: xmin: -67.38379 ymin: -41.03791 xmax: -53.63737 ymax: -21.99877
z_range: zmin: 0 zmax: 0
Geodetic CRS: WGS 84
First 10 features:
ADM2_REF anio my_variable geometry
2 Tres Arroyos 1978 180 MULTIPOLYGON Z (((-60.16947...
3 Tres Arroyos 1979 0 MULTIPOLYGON Z (((-60.16947...
4 Tres Arroyos 1988 1000 MULTIPOLYGON Z (((-60.16947...
5 Tres Arroyos 1989 1000 MULTIPOLYGON Z (((-60.16947...
6 Tres Arroyos 1990 3000 MULTIPOLYGON Z (((-60.16947...
7 Tres Arroyos 1991 1500 MULTIPOLYGON Z (((-60.16947...
8 Tres Arroyos 1992 2800 MULTIPOLYGON Z (((-60.16947...
9 Tres Arroyos 1993 2800 MULTIPOLYGON Z (((-60.16947...
10 Tres Arroyos 1994 2500 MULTIPOLYGON Z (((-60.16947...
11 Tres Arroyos 1995 1250 MULTIPOLYGON Z (((-60.16947...
The steps to convert the data frame above to a raster are:
baserast <- rast(nrows=nrows, ncols=ncols,
                 extent=extent,
                 crs="+proj=longlat +datum=WGS84",
                 vals=NA)
rasters <- rast(lapply(1978:2019,
                       function(x)
                         rasterize(vect(shp.soy.yld.arg %>%
                                          filter(anio==x)), baserast, "my_variable")))
Link to one year of the data as a .gpkg (it was too large for all years).
CodePudding user response:
In cases like that you should rasterize density instead of area
Example data:
library(terra)
f <- system.file("ex/lux.shp", package="terra")
v <- vect(f)
# all areas have 100 ha of the crop
v$crop_area <- 100
r <- rast(v, res=.01)
Compute density
e <- expanse(v, unit="ha") # or "km" to avoid small numbers
v$density <- v$crop_area / e
Rasterize
x <- rasterize(v, r, "density")
Back to area
ra <- cellSize(r, unit="ha")
area <- x * ra
Check that the numbers are reasonable (error should be smallest for large areas / small cells). The expected value for each polygon is 100.
extract(area, v, sum, na.rm=TRUE, exact=TRUE) |> round()
# ID density
# [1,] 1 99
# [2,] 2 101
# [3,] 3 99
# [4,] 4 95
# [5,] 5 99
# [6,] 6 98
# [7,] 7 96
# [8,] 8 99
# [9,] 9 98
#[10,] 10 98
#[11,] 11 101
#[12,] 12 100
Below I show how to do this in a loop over multiple years
Example data:
library(terra)
f <- system.file("ex/lux.shp", package="terra")
v <- vect(f)
# all areas have 100 ha of the crop
v$crop_area <- 100
v$year <- rep(c(1990,1991, 1992), each=4)
r <- rast(v, res=.01)
Solution
ra <- cellSize(r, unit="ha")
e <- expanse(v, unit="ha")
v$density <- v$crop_area / e
years <- unique(v$year)
out <- list()
for (i in 1:length(years)) {
vv <- v[v$year == years[i], ]
x <- rasterize(vv, r, "density")
out[[i]] <- x * ra
}
out <- rast(out)
names(out) <- years
out
#class : SpatRaster
#dimensions : 73, 78, 3 (nrow, ncol, nlyr)
#resolution : 0.01, 0.01 (x, y)
#extent : 5.74414, 6.52414, 49.44781, 50.17781 (xmin, xmax, ymin, ymax)
#coord. ref. : lon/lat WGS 84 (EPSG:4326)
#sources : memory
# memory
# memory
#names : 1990, 1991, 1992
#min values : 0.2544571, 0.3028134, 0.3200223
#max values : 1.0492719, 0.6249076, 0.4335355
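A quick sanity check on the totals per layer: each year holds four polygons of 100 ha each, so every layer should sum to roughly 400 ha, give or take some rasterization error at the polygon edges.
global(out, "sum", na.rm=TRUE)
Applied to the data in the question, the same idea would look something like the sketch below. This is untested and assumes that shp.soy.yld.arg is the sf object shown in the update (with columns anio and my_variable) and that baserast is the template raster already created there:
library(terra)
v <- vect(shp.soy.yld.arg)
# value per hectare of polygon; whatever unit my_variable is in,
# density * cell area (ha) returns that unit again
v$density <- v$my_variable / expanse(v, unit="ha")
csize <- cellSize(baserast, unit="ha")
rasters <- rast(lapply(1978:2019, function(x) {
  d <- rasterize(v[v$anio == x, ], baserast, "density")
  d * csize
}))
names(rasters) <- 1978:2019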
CodePudding user response:
If I understand your question correctly (a reproducible example would have been appreciated), you want the pixels of each rasterized polygon to sum up to the harvested value ("my_variable" in your code).
Here I create a toy example to show you my reasoning:
1. first load the libraries
2. create toy data with an example total and harvested area
3. calculate the fraction of each pixel covered by the polygon
4. divide each cover fraction by the total area of the polygon and multiply it by the harvested area
library(sf)
library(raster)
library(exactextractr)

# toy raster (10 x 10 cells) and a polygon with a total area and a harvested area
rast <- raster::raster(matrix(rep(1, 100), ncol=10), xmn=0, ymn=0, xmx=10, ymx=10)
pol <- sf::st_sfc(sf::st_polygon(list(cbind(c(0.5, 4, 7, 0.5), c(1, 0, 4, 1)))))
pol <- st_sf(data.frame(area=st_area(pol), harvest=0.7, geom=pol))
raster::plot(rast)
raster::plot(pol, add=TRUE)

# fraction of each pixel covered by the polygon
cov_frac <- exactextractr::coverage_fraction(rast, pol)[[1]]
raster::plot(cov_frac)
raster::plot(pol, add=TRUE)

# divide each cover fraction by the polygon area and multiply by the harvested area
result <- cov_frac / st_area(pol) * pol$harvest
sum(values(result))
As you can see, the sum of all pixels in the rasterized polygon is equal to the harvested area (0.7).
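Note that this works because each cell of the toy raster has unit area, so the coverage fraction already equals the overlap area. For a grid whose cells do not have unit area you would first multiply the coverage fraction by the cell area; for a regular planar grid, something like this untested sketch should do it:
# overlap area per cell = coverage fraction * cell area
cell_area <- prod(raster::res(rast))
result <- cov_frac * cell_area / as.numeric(st_area(pol)) * pol$harvest
sum(values(result))   # again returns the harvested area
For a longitude/latitude grid the cell area varies from cell to cell, so you would use per-cell areas (for example cellSize() from terra, as in the other answer) instead of a single constant.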