HC Teo
14 July 2019
Pansharpening Sentinel-2
Various methods have been suggested for pansharpening the Sentinel-2 infrared bands (Gasparovic & Jogun, 2018; Selva et al., 2014; Wang et al., 2016). A comparative study concluded that, despite the plethora of methods, using the average of the 10m bands (B2-4 and B8) as a panchromatic band was not only the simplest but also the most effective approach (Kaplan, 2018).
In Google Earth Engine (GEE), only HSV pansharpening is available as a built-in script. HSV pansharpening transforms an RGB image into HSV colour space and back again, which means it can only be applied to exactly three bands. What if we want to pansharpen more than three bands?
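For reference, the built-in HSV approach follows the pattern sketched below (adapted from the GEE documentation example; the variable names rgb and pan are placeholders, and for Sentinel-2 the pan band would itself have to be synthesised, since the sensor has no native panchromatic band):
// Sketch of HSV pansharpening (for comparison only; not used in the script below).
// Assumes 'rgb' holds Sentinel-2 B4/B3/B2 reflectance scaled to 0-1 and 'pan' is a
// single-band panchromatic image, e.g. the mean of the 10m bands.
var hsv = rgb.select(['B4', 'B3', 'B2']).rgbToHsv();
var sharpened = ee.Image.cat([
hsv.select('hue'), hsv.select('saturation'), pan
]).hsvToRgb();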
Desktop software such as Erdas Imagine offers other pansharpening algorithms, such as the commonly used high-pass filter (HPF) resolution merge, which supports pansharpening multiple bands at once. As far as we know, no one has published a GEE implementation of HPF resolution merge, so we have attempted one.
Step 1. Cloud masking
We use the Sentinel-2 QA60 quality assurance band for cloud masking (code source: github). In some areas the QA60 band does not produce very good results. We tried supposedly more advanced cloud masking methods, e.g. Fmask (Frantz et al., 2018), but these performed worse than QA60 in the areas we mapped in Peninsular Malaysia, so it is probably better and easier to stick with QA60.
var boundary = AOI;
//----------------------------------
Map.centerObject(boundary, 10);
// 1. Cloud Masking
function maskS2clouds(image) {
var qa = image.select('QA60');
// Bits 10 and 11 are clouds and cirrus, respectively.
var cloudBitMask = ee.Number(2).pow(10).int();
var cirrusBitMask = ee.Number(2).pow(11).int();
// Both flags should be set to zero, indicating clear conditions.
var mask = qa.bitwiseAnd(cloudBitMask).eq(0).and(
qa.bitwiseAnd(cirrusBitMask).eq(0));
// Return the masked and scaled data, without the QA bands.
return image.updateMask(mask).divide(10000)
.select("B.*")
.copyProperties(image, ["system:time_start"]);
}
Steps 2-4. Load Sentinel-2 data and create image
Load Sentinel-2 TOA reflectance data and reduce the collection to a single median image. Use the average of the 10m bands (B2-4, B8) to create a panchromatic band that will later be used to pansharpen the 20m bands. Then create two images: one with the 10m bands and one with the 20m bands. Note that the reprojections below use a fixed 10m/20m grid in UTM zone 48N (EPSG:32648) because our study area is in Peninsular Malaysia; change the CRS and transform for other regions.
// 2. Load Sentinel-2 TOA reflectance data.
var collection = ee.ImageCollection('COPERNICUS/S2')
.filterBounds(boundary)
.filterDate('2016-06-01', '2019-01-30')
// Pre-filter to get less cloudy granules.
.filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
.map(maskS2clouds);
//3. Reduce the collection into a single median composite
var image = collection.median();
// Clip to the AOI, scale up the reflectance values and cast to unsigned 16-bit integers
var image = image.clip(boundary).multiply(100000).uint16();
//4. Get average of 10m bands
var panchro = (image.select('B2').add(image.select('B3'))
.add(image.select('B4')).add(image.select('B8')))
.divide(4);
var panchro = panchro.reproject({crs:'EPSG:32648',crsTransform:[10,0,300000,0,-10,300000]});
//4b. Create image with 10m bands
var tenmbands = image.select(['B2','B3','B4','B8']);
var tenmbands = tenmbands.reproject({crs:'EPSG:32648',crsTransform:[10,0,300000,0,-10,300000]});
//4c. Create image with 20m bands
var twentymbands = image.select(['B5','B6','B7','B8A','B11','B12']);
var twentymbands = twentymbands.reproject({crs:'EPSG:32648',crsTransform:[20,0,300000,0,-20,300000]});
Step 5. Pansharpen IR bands
Here comes the exciting part! We referred to Erdas Imagine's documentation on HPF resolution merge while building this code. Watch out for the parameters in 5b: we think these will work well as defaults for Sentinel-2, but they may have to be changed for other applications.
This code could certainly be made more elegant and versatile for other applications (e.g. sensors with different bands).
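In outline, this is our reading of the procedure, and it is what the code below implements: (i) resample the 20m bands to 10m with bilinear interpolation; (ii) convolve the panchromatic band with a high-pass kernel; (iii) add the high-pass image to each resampled band, weighted by W = (SD(MS) / SD(HPF)) x M, where M is the modulating factor; and (iv) linearly stretch each output band back to the mean and standard deviation of the original 20m band.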
//5. Pansharpen IR bands
//5a. Resample the MS image to the pan pixel size using bilinear interpolation.
//The resampled bands will then have the same pixel size as the high-resolution pan band.
var resampledMS = twentymbands.resample('bilinear').reproject({
crs: twentymbands.projection().crs(),
scale: 10
});
//5b. Define parameters and run the high-pass filter
//Ratio of MS resolution to pan resolution. For Sentinel-2 it is 20/10 = 2;
//in the Erdas documentation this ratio guides the choice of kernel size and modulating factor below.
var ratio = 2;
//Kernel size set to 5.
var kernel_size = 5;
//Modulating factor set to 0.25.
var mod = 0.25;
// Create a list of weights for a 5x5 kernel.
var list = [-1, -1, -1, -1, -1];
// The centre weight is 24, so the kernel weights sum to zero (a high-pass filter).
var centerList = [-1, -1, 24, -1, -1];
// Assemble a list of lists: the 5x5 kernel weights as a 2-D matrix.
var lists = [list, list, centerList, list, list];
// Create the kernel from the weights, with the focus at the centre of the 5x5 window.
var kernel = ee.Kernel.fixed(5, 5, lists, -2, -2, false);
// High pass filter the image by convolving with the kernel.
var hpf = panchro.convolve(kernel);
//5c. Add HPF image to MS image
// Get the standard deviation of each band over the AOI
var MSstdev = resampledMS.reduceRegion({
reducer: ee.Reducer.stdDev(),
geometry: AOI,
bestEffort: true,
});
var hpfstdev = hpf.reduceRegion({
reducer: ee.Reducer.stdDev(),
geometry: AOI,
bestEffort: true,
});
// Set up the calculation W = (SD(MS) / SD(HPF)) x M and Pixel(out) = Pixel(in) + HPF x W
var MSstdev = MSstdev.values(['B5','B6','B7','B8A','B11','B12']);
var hpfstdev = ee.Image.constant(ee.List(hpfstdev.values()).get(0));
var calcB5 = hpf.multiply(ee.Number(ee.List(MSstdev).get(0))).divide(hpfstdev).multiply(mod);
var calcB6 = hpf.multiply(ee.Number(ee.List(MSstdev).get(1))).divide(hpfstdev).multiply(mod);
var calcB7 = hpf.multiply(ee.Number(ee.List(MSstdev).get(2))).divide(hpfstdev).multiply(mod);
var calcB8A = hpf.multiply(ee.Number(ee.List(MSstdev).get(3))).divide(hpfstdev).multiply(mod);
var calcB11 = hpf.multiply(ee.Number(ee.List(MSstdev).get(4))).divide(hpfstdev).multiply(mod);
var calcB12 = hpf.multiply(ee.Number(ee.List(MSstdev).get(5))).divide(hpfstdev).multiply(mod);
//run calculation
var MSoutput = resampledMS.select('B5').add(calcB5);
var MSoutput = MSoutput.addBands(resampledMS.select('B6').add(calcB6))
.addBands(resampledMS.select('B7').add(calcB7))
.addBands(resampledMS.select('B8A').add(calcB8A))
.addBands(resampledMS.select('B11').add(calcB11))
.addBands(resampledMS.select('B12').add(calcB12));
var MSoutput = MSoutput.uint16();
//5d. Linear stretch the output back to the statistics of the original 20m bands
var irbands = ['B5','B6','B7','B8A','B11','B12'];
// Per-band mean and standard deviation of the pansharpened output over the AOI,
// turned into constant images so they can be used in per-pixel band maths
var MSoutput_avg = MSoutput.reduceRegion({
reducer: ee.Reducer.mean(),
geometry: AOI,
bestEffort: true,
}).values(irbands);
var MSoutput_avg = ee.Image.constant(MSoutput_avg).rename(irbands);
var MSoutput_sd = MSoutput.reduceRegion({
reducer: ee.Reducer.stdDev(),
geometry: AOI,
bestEffort: true,
}).values(irbands);
var MSoutput_sd = ee.Image.constant(MSoutput_sd).rename(irbands);
// Per-band mean and standard deviation of the original 20m bands
var twenty_avg = twentymbands.reduceRegion({
reducer: ee.Reducer.mean(),
geometry: AOI,
bestEffort: true,
}).values(irbands);
var twenty_avg = ee.Image.constant(twenty_avg).rename(irbands);
var twenty_sd = twentymbands.reduceRegion({
reducer: ee.Reducer.stdDev(),
geometry: AOI,
bestEffort: true,
}).values(irbands);
var twenty_sd = ee.Image.constant(twenty_sd).rename(irbands);
var MSoutput = (MSoutput.subtract(MSoutput_avg)).divide(MSoutput_sd)
.multiply(twenty_sd).add(twenty_avg).uint16();
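At this point it is worth eyeballing the result. A minimal sketch for comparing the original and pansharpened bands in the Code Editor is below; the band combination and stretch values are illustrative choices for our scaled (x100000) reflectance, not part of the method.
// Compare the original 20m bands with the pansharpened 10m output.
// Band combination and min/max stretch are illustrative only.
Map.addLayer(twentymbands, {bands: ['B11', 'B8A', 'B5'], min: 0, max: 40000}, 'original 20m');
Map.addLayer(MSoutput, {bands: ['B11', 'B8A', 'B5'], min: 0, max: 40000}, 'pansharpened 10m');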
Steps 6-7. Adding indices, stacking and clipping
Using the IR bands (computed here from the original composite, image) we can generate some commonly used indices, stack everything into a single image and clip it to the boundary.
//6. Create indices and rescale 0 to 40000 (set rescale value at half of this)
var rescale = 20000;
var ndvi = image.normalizedDifference(['B8A', 'B4']).multiply(rescale).add(rescale).uint16().rename('NDVI');
var ndwi = image.normalizedDifference(['B8A', 'B11']).multiply(rescale).add(rescale).uint16().rename('NDWI');
var ndbi = image.normalizedDifference(['B11', 'B8']).multiply(rescale).add(rescale).uint16().rename('NDBI');
// Bare soil index: ((B11 + B4) - (B8 + B2)) / ((B11 + B4) + (B8 + B2))
var swirred = image.select('B11').add(image.select('B4'));
var nirblue = image.select('B8').add(image.select('B2'));
var bsi = swirred.subtract(nirblue).divide(swirred.add(nirblue))
.multiply(rescale).add(rescale).uint16().rename('BSI');
//7. Stack and clip
var output = tenmbands.addBands(MSoutput).addBands(ndvi).addBands(ndwi)
.addBands(ndbi).addBands(bsi);
var output = output.clip(boundary);
print(output);
Step 8. Random forest classifier
This code runs a random forest classifier on the stacked image.
//8. Run random forest classifier
// Load training polygons (here the imported FeatureCollection 'traininga').
// The 'class' property stores the known class labels.
var polygons = ee.FeatureCollection(traininga);
// Get the values for all pixels in each polygon in the training.
var training = output.sampleRegions({
// Get the sample from the polygons FeatureCollection.
collection: polygons,
// Keep this list of properties from the polygons.
properties: ['class'],
// Set the scale to get Sentinel pixels in the polygons.
scale: 10
});
// Select bands to be used for training.
var bands = ['B2', 'B3', 'B4', 'B8', 'B5', 'B6', 'B7', 'B8A', 'B11', 'B12'
, 'NDVI','NDWI','NDBI','BSI'];
// Create and train the random forest classifier.
// (Newer versions of the GEE API may require ee.Classifier.smileRandomForest() with an explicit number of trees.)
var trained = ee.Classifier.randomForest().train(training, 'class', bands);
// Classify the image.
var classified = output.classify(trained);
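To inspect the classification in the Code Editor, something like the sketch below works; the class range and palette are placeholders that depend on how many classes your training polygons contain.
// Quick look at the classification. min/max and palette are placeholders
// for however many classes exist in the 'class' property.
Map.addLayer(classified, {min: 0, max: 3, palette: ['006400', 'ffbb22', 'fa0000', '0064c8']}, 'classified');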
Step 9. Export
Note that if the area is too large, GEE may not be able to display the image, but it may still be possible to export it.
//9. Export image and classification.
Export.image.toDrive({
image: output,
description: 'output',
scale: 10,
fileFormat: 'GeoTIFF',
maxPixels: 10000000000,
});
Export.image.toDrive({
image: classified,
description: 'classified',
scale: 10,
fileFormat: 'GeoTIFF',
maxPixels: 10000000000,
});
References
Frantz, D., Haß, E., Uhl, A., Stoffels, J., & Hill, J. (2018). Improvement of the Fmask algorithm for Sentinel-2 images: Separating clouds from bright surfaces based on parallax effects. Remote Sensing of Environment, 215, 471-481. https://doi.org/10.1016/j.rse.2018.04.046
Gasparovic, M., & Jogun, T. (2018). The effect of fusing Sentinel-2 bands on land-cover classification. International Journal of Remote Sensing, 39(3), 822-841. https://doi.org/10.1080/01431161.2017.1392640
Kaplan, G. (2018). Sentinel-2 pan sharpening: Comparative analysis. Proceedings, 2(7), 345. https://doi.org/10.3390/ecrs-2-05158
Selva, M., Aiazzi, B., Butera, F., Chiarantini, L., & Baronti, S. (2014). Hyper-sharpening of hyperspectral data: A first approach. In 2014 6th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS) (pp. 1-4). IEEE. https://doi.org/10.1109/WHISPERS.2014.8077543
Wang, Q., Shi, W., Li, Z., & Atkinson, P. M. (2016). Fusion of Sentinel-2 images. Remote Sensing of Environment, 187, 241-252. https://doi.org/10.1016/j.rse.2016.10.030