I use the Node.js S3 package "aws-sdk". It works fine when I run the app with serverless-offline on my Mac: both s3.getSignedUrl and s3.listObjects succeed.
But when I run the deployed app, s3.getSignedUrl still works while s3.listObjects does not. I get this error in CloudWatch:
In CloudWatch > Log groups > /aws/lambda/mamahealth-api-stage-userFilesIndex:
2021-12-24T02:49:50.965Z 421e054e-d1bf-429a-b73c-402ad21c7bae ERROR AccessDenied: Access Denied
at Request.extractError (/var/task/node_modules/aws-sdk/lib/services/s3.js:714:35)
at Request.callListeners (/var/task/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
at Request.emit (/var/task/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
at Request.emit (/var/task/node_modules/aws-sdk/lib/request.js:688:14)
at Request.transition (/var/task/node_modules/aws-sdk/lib/request.js:22:10)
at AcceptorStateMachine.runTo (/var/task/node_modules/aws-sdk/lib/state_machine.js:14:12)
at /var/task/node_modules/aws-sdk/lib/state_machine.js:26:10
at Request.<anonymous> (/var/task/node_modules/aws-sdk/lib/request.js:38:9)
at Request.<anonymous> (/var/task/node_modules/aws-sdk/lib/request.js:690:12)
at Request.callListeners (/var/task/node_modules/aws-sdk/lib/sequential_executor.js:116:18) {
code: 'AccessDenied',
region: 'ap-northeast-1',
time: 2021-12-24T02:49:50.960Z,
requestId: 'Q8B79GKAHPHMH3DN',
extendedRequestId: 'Nhx4ekCzotCSjGXGssFl0lQtyrWf01Gf8416FaqBALA07g3qm31avCIErDPcJWaJt90xNz8w0o=',
cfId: undefined,
statusCode: 403,
retryable: false,
retryDelay: 43.44595425080651
}
It looks like my S3 setup has a permission problem.
My aws-sdk version is 2.995.0
My helpers/s3.ts code:
import stream from 'stream';
import { nanoid } from 'nanoid';
import axios from 'axios';
import AWS from 'aws-sdk';
import mime from 'mime-types';
import moment from 'moment-timezone';
AWS.config.update({
region: 'ap-northeast-1',
});
const s3 = new AWS.S3();
export const uploadFromStream = (key: string, fileExt: string) => {
const pass = new stream.PassThrough();
return {
writeStream: pass,
promise: s3
.upload({
Bucket: process.env.AWS_BUCKET_NAME!,
Key: key,
Body: pass,
ContentType: mime.lookup(fileExt) || undefined,
})
.promise(),
};
};
type S3FileData = {
lastModified: number;
id: string;
fileExt: string;
size: number;
};
export const listObjects = async (s3Folder: string): Promise<S3FileData[]> => {
const params = {
Bucket: process.env.AWS_BUCKET_NAME!,
Delimiter: '/',
Prefix: `${s3Folder}/`,
};
const data = await s3.listObjects(params).promise();
if (!data.Contents) return [];
const fileList: S3FileData[] = [];
for (let index = 0; index < data.Contents.length; index += 1) {
const content = data.Contents[index];
const { Size: size } = content;
const splitedKey: string[] | undefined = content.Key?.split('/');
const lastModified = moment(content.LastModified).unix();
const fileFullName =
(splitedKey && splitedKey[splitedKey.length - 1]) || '';
const fileFullNameSplited = fileFullName.split('.');
if (fileFullNameSplited.length < 2 || !size)
throw Error('no file ext or no size');
const fileExt = fileFullNameSplited.pop() as string;
const id = fileFullNameSplited.join();
fileList.push({ id, fileExt, lastModified, size });
}
return fileList;
};
export const uploadFileFromBuffer = async (
key: string,
fileExt: string,
buffer: Buffer,
) => {
return s3
.upload({
Bucket: process.env.AWS_BUCKET_NAME!,
Key: key,
Body: buffer,
ContentType: mime.lookup(fileExt) || undefined,
})
.promise();
};
export const uploadFileFromNetwork = async (
key: string,
fileExt: string,
readUrl: string,
) => {
const { writeStream, promise } = uploadFromStream(key, fileExt);
const response = await axios({
method: 'get',
url: readUrl,
responseType: 'stream',
});
response.data.pipe(writeStream);
return promise;
};
export enum S3ResourceType {
image = 'image',
report = 'report',
}
export const getSystemGeneratedFileS3Key = (
resourceType: S3ResourceType,
fileExt: string,
id?: string,
): string => {
return `system-generated/${resourceType}/${id || nanoid()}.${fileExt}`;
};
export const getUserUploadedFileS3Key = (
userId: string,
fileExt: string,
id?: string,
) => {
return `user-uploaded/${userId}/${id || nanoid()}.${fileExt}`;
};
export const downloadFile = async (key: string) => {
const params: AWS.S3.GetObjectRequest = {
Bucket: process.env.AWS_BUCKET_NAME!,
Key: key,
};
const { Body } = await s3.getObject(params).promise();
return Body;
};
export const deleteFile = (key: string) => {
const params: AWS.S3.DeleteObjectRequest = {
Bucket: process.env.AWS_BUCKET_NAME!,
Key: key,
};
return s3.deleteObject(params).promise();
};
export enum GetSignedUrlOperation {
getObject = 'getObject',
putObject = 'putObject',
}
// Change this value to adjust the signed URL's expiration
const URL_EXPIRATION_SECONDS = 300;
export type GetSignedUrlOptions = {
contentType: string;
};
/**
 * getSignedUrl
 * @param key S3 object key
 * @param putOptions If provided, returns a signed upload (putObject) URL; otherwise a signed download (getObject) URL.
 * @param expirationSeconds URL expiration in seconds; defaults to 300.
 * @returns Signed URL
 */
export const getSignedUrl = (
key: string,
putOptions?: GetSignedUrlOptions,
expirationSeconds?: number,
) => {
const contentType = putOptions?.contentType;
const operation = putOptions
? GetSignedUrlOperation.putObject
: GetSignedUrlOperation.getObject;
return s3.getSignedUrl(operation, {
Bucket: process.env.AWS_BUCKET_NAME,
Key: key,
Expires: expirationSeconds || URL_EXPIRATION_SECONDS,
ContentType: contentType,
});
};
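For reference, here is a rough sketch of how these helpers fit together (the import path, user id, and file extension are just placeholders):
import {
  getSignedUrl,
  getUserUploadedFileS3Key,
  listObjects,
} from './helpers/s3';
// Hypothetical caller: build a key for an upload, hand out signed URLs,
// then list everything in that user's folder.
export const exampleUsage = async (userId: string) => {
  const key = getUserUploadedFileS3Key(userId, 'png');
  const uploadUrl = getSignedUrl(key, { contentType: 'image/png' }); // signed putObject URL
  const downloadUrl = getSignedUrl(key); // signed getObject URL
  const files = await listObjects(`user-uploaded/${userId}`); // this is the call that gets AccessDenied when deployed
  return { uploadUrl, downloadUrl, files };
};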
Here are my S3 bucket settings:
S3MasterResourceBucket:
Type: AWS::S3::Bucket
Properties:
AccelerateConfiguration:
AccelerationStatus: Suspended
BucketEncryption:
ServerSideEncryptionConfiguration:
- ServerSideEncryptionByDefault:
SSEAlgorithm: AES256
PublicAccessBlockConfiguration:
BlockPublicAcls: TRUE
BlockPublicPolicy: TRUE
IgnorePublicAcls: TRUE
RestrictPublicBuckets: TRUE
VersioningConfiguration:
Status: Enabled
My IAM settings in serverless.yaml:
provider:
iamRoleStatements:
- Effect: Allow
Action:
- dynamodb:Query
- dynamodb:GetItem
- dynamodb:PutItem
- dynamodb:UpdateItem
- dynamodb:DeleteItem
- xray:PutTraceSegments
- xray:PutTelemetryRecords
- cognito-idp:AdminAddUserToGroup
- cognito-idp:AdminUpdateUserAttributes
- cognito-idp:AdminInitiateAuth
- cognito-idp:AdminGetUser
- s3:PutObject
- s3:GetObject
- s3:DeleteObject
- s3:ListBucket
- sqs:SendMessage
Resource:
- "Fn::ImportValue": mamahealth-api-${self:provider.stage}-ImagePostProcessQueueArn
- "Fn::ImportValue": mamahealth-api-${self:provider.stage}-CognitoUserPoolMyUserPoolArn
- "Fn::ImportValue": mamahealth-api-${self:provider.stage}-CognitoUserPoolMyUserPoolArn2
- "Fn::ImportValue": mamahealth-api-${self:provider.stage}-DynamoDBMasterTableArn
- "Fn::Join":
- "/"
- - "Fn::ImportValue": mamahealth-api-${self:provider.stage}-DynamoDBMasterTableArn
- "index"
- "*"
- "Fn::Join":
- "/"
- - "Fn::ImportValue": mamahealth-api-${self:provider.stage}-S3MasterResourceBucketArn
- "*"
I saw Ermiya Eskandary's comment on this question: Amazon S3 getObject() receives access denied with NodeJS. I then checked my configuration against that list:
- The file exists
Yes. This code returns data when I run it with serverless-offline, but the deployed app throws the 403 error on the same call:
const data = await s3.listObjects(params).promise();
- Use the correct key and bucket name in the correct region.
Yes, the key, bucket name, and region are all correct.
- with the correct access key and secret access key for the user with permissions?
On my Mac, I ran this command:
aws configure
Then I entered my team account's Access Key ID and Secret Access Key correctly.
- The roles assigned to the user.
The role is "AdministratorAccess".
CodePudding user response:
In the last line of your IAM role statements, you grant the Lambda function permission to perform s3:PutObject, s3:GetObject, s3:DeleteObject, and s3:ListBucket on S3MasterResourceBucketArn/*.
I believe the first three actions and the last one have different resource requirements. For the first three (PutObject, GetObject, and DeleteObject) the resource is correct, because they act on objects inside the bucket (e.g. arn:aws:s3:::your-bucket/*). ListBucket, however, acts on the bucket itself, so its resource must be the bucket ARN without the trailing /* (S3MasterResourceBucketArn, e.g. arn:aws:s3:::your-bucket).
As a good practice, you should split your policy into multiple statements, like:
provider:
iamRoleStatements:
- Effect: Allow
Action:
- dynamodb:Query
- dynamodb:GetItem
- dynamodb:PutItem
- dynamodb:UpdateItem
- dynamodb:DeleteItem
Resource:
- "Fn::ImportValue": mamahealth-api-${self:provider.stage}-DynamoDBMasterTableArn
- "Fn::Join":
- "/"
- - "Fn::ImportValue": mamahealth-api-${self:provider.stage}-DynamoDBMasterTableArn
- "index"
- "*"
- Effect: Allow
Action:
- cognito-idp:AdminAddUserToGroup
- cognito-idp:AdminUpdateUserAttributes
- cognito-idp:AdminInitiateAuth
- cognito-idp:AdminGetUser
Resource:
- "Fn::ImportValue": mamahealth-api-${self:provider.stage}-CognitoUserPoolMyUserPoolArn
- "Fn::ImportValue": mamahealth-api-${self:provider.stage}-CognitoUserPoolMyUserPoolArn2
- Effect: Allow
Action:
- sqs:SendMessage
Resource:
- "Fn::ImportValue": mamahealth-api-${self:provider.stage}-ImagePostProcessQueueArn
- Effect: Allow
Action:
- s3:ListBucket
Resource:
- "Fn::ImportValue": mamahealth-api-${self:provider.stage}-S3MasterResourceBucketArn
- Effect: Allow
Action:
- s3:PutObject
- s3:GetObject
- s3:DeleteObject
Resource:
- "Fn::Join":
- "/"
- - "Fn::ImportValue": mamahealth-api-${self:provider.stage}-S3MasterResourceBucketArn
- "*"
- Effect: Allow
Action:
- xray:PutTraceSegments
- xray:PutTelemetryRecords
Resource:
- "*"