An error occurred (AccessDenied) when calling the PutObject operation: Access Denied (KMS)

I have a bucket in ACCOUNT-A with encryption enabled. The federated IAM role in ACCOUNT-A (the account in which I created the bucket) can upload, copy, and delete objects in that bucket.

From ACCOUNT-B, using another federated IAM role, I am trying to copy a file to the bucket but it fails.

I have a bucket policy on BUCKET-IN-ACCOUNT-A. Using ACCOUNT-B I can list the objects in the bucket, but I can't copy any file to it.

Do I have to grant some rights to the role in ACCOUNT-B? Here is my bucket policy:

{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::ACCOUNT-B:root" ] }, "Action": [ "s3:GetBucketLocation", "s3:ListBucket", "s3:PutObject", "s3:PutObjectAcl" ], "Resource": [ "arn:aws:s3:::BUCKET-IN-ACCOUNT-A", "arn:aws:s3:::BUCKET-IN-ACCOUNT-A/*" ] } ] }

I'm running an Amazon EC2 (Ubuntu) instance which outputs a JSON file daily. I am now trying to copy this JSON to Amazon S3 so that I can eventually download it to my local machine. Following the instructions here (reading in a file from ubuntu (AWS EC2) on local machine?), I'm using boto3 to copy the JSON from Ubuntu to S3:


import boto3

print("This script uploads the SF events JSON to s3")

ACCESS_ID = 'xxxxxxxx'
ACCESS_KEY = 'xxxxxxx'

s3 = boto3.resource(
    's3',
    aws_access_key_id=ACCESS_ID,
    aws_secret_access_key=ACCESS_KEY,
)

def upload_file_to_s3(s3_path, local_path):
    # "s3://mybucket/sf_events.json" -> the bucket name is the third '/'-field.
    bucket = s3_path.split('/')[2]
    print(bucket)
    # Everything after the bucket name is the object key.
    file_path = '/'.join(s3_path.split('/')[3:])
    print(file_path)
    # upload_file returns None on success; the print is kept for parity.
    response = s3.Object(bucket, file_path).upload_file(local_path)
    print(response)

s3_path = "s3://mybucket/sf_events.json"
local_path = "/home/ubuntu/bandsintown/sf_events.json"
upload_file_to_s3(s3_path, local_path)

The credentials I'm using here are for a new user I created in AWS Identity and Access Management (IAM).


However, when I run this script, I get the following error:


boto3.exceptions.S3UploadFailedError: Failed to upload /home/ubuntu/bandsintown/sf_events.json to mybucket/sf_events.json: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied

I've also tried attaching an IAM role to the EC2 instance and giving that role full S3 permissions, but still no luck.
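One caveat worth noting: because the script passes aws_access_key_id and aws_secret_access_key explicitly, boto3 uses those keys and never consults the instance role. A minimal sketch that instead lets boto3 fall back to its default credential chain, reusing the bucket and paths from the script above:

import boto3

# With no explicit keys, boto3 resolves credentials from its default chain:
# environment variables, the shared credentials file, then the EC2 instance
# profile. Instance-profile credentials are temporary and include the
# session token automatically, so no aws_session_token is needed here.
s3 = boto3.resource('s3')

s3.Object('mybucket', 'sf_events.json').upload_file(
    '/home/ubuntu/bandsintown/sf_events.json'
)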


It appears to be a permissions issue — can anyone tell me how I might begin to solve this? Do I need the AWS CLI? I'm also reading in the boto3 documentation that I may need an aws_session_token parameter in my script.

Quite simply, I'm lost. Thanks.


This error message indicates that your IAM user or role needs permission for the kms:GenerateDataKey and kms:Decrypt actions. These permissions are required for multipart uploads to a bucket with AWS KMS default encryption. Follow these steps to add permissions for kms:GenerateDataKey and kms:Decrypt (an example policy statement follows the steps):

1.    Open the IAM console.

2.    From the console, open the IAM user or role that you're using to upload files to the Amazon S3 bucket.

3.    In the Permissions tab of your IAM user or role, expand each policy to view its JSON policy document.

4.    In the JSON policy documents, look for policies related to AWS KMS access. Review statements with "Effect": "Allow" to check whether the role has permissions for kms:GenerateDataKey and kms:Decrypt on the bucket's AWS KMS key. If these permissions are missing, then add them to the appropriate policy. For instructions, see Adding permissions to a user (console) or Modifying a role permissions policy (console).

5.    In the JSON policy documents, look for statements with "Effect": "Deny". Then, confirm that those statements don't deny the s3:PutObject action on the bucket. The statements must also not deny the IAM user or role access to the kms:GenerateDataKey and kms:Decrypt actions on the key used to encrypt the bucket. Additionally, make sure the necessary KMS and S3 permissions are not restricted by a VPC endpoint policy, service control policy, permissions boundary, or session policy.
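For illustration, a statement granting the two KMS actions could look like the following sketch, where the key ARN is a placeholder for the key that encrypts the bucket:

{
    "Effect": "Allow",
    "Action": [
        "kms:GenerateDataKey",
        "kms:Decrypt"
    ],
    "Resource": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
}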

pwaller opened this issue on Jun 16, 2014 · 26 comments (now closed)

I have the following policy for my instance role:

{ "Version": "2012-10-17", "Statement": [ { "Action": [ "s3:PutObject" ], "Resource": [ "arn:aws:s3:::foo/bar/*" ], "Effect": "Allow" } ] }

If I try to

aws --region=eu-west-1 s3 cp --acl public-read ./baz s3://mybucket/foo/bar/baz

Then I get:

upload failed: ./baz to s3://mybucket/foo/bar/baz A client error (AccessDenied) occurred when calling the PutObject operation: Access Denied

If I change the policy to allow s3:* rather than just PutObject, then it works. It doesn't work if I add ListObject.

Any ideas?

aws-cli/1.3.4 boto==2.9.6 botocore==0.38.0

I think this might be our bug. I wasn't aware of the need for a PutObjectAcl permission. It might be helpful if the documentation said which permissions were needed.

This appears to work:

{ "Version": "2012-10-17", "Statement": [ { "Action": [ "s3:PutObject", "s3:PutObjectAcl" ], "Resource": [ "arn:aws:s3:::foo/bar/*" ], "Effect": "Allow" } ] }


Well, I'll reopen this issue for thought because the error message was unhelpful. It could have told me that it was doing a PutObjectAcl or something when it failed.

+1

I had the same problem and I solved it by adding PutObjectAcl. The error message isn't helpful.

+1

Thanks for this issue! That solved it for me as well. A better error message would be helpful, though.

I think our best bet here would be to update our documentation. Part of the problem from the CLI side is that we don't actually know why the request failed. The error message we display is taken directly from the XML response returned by S3:

<?xml version="1.0" encoding="UTF-8"?>
<Error>
    <Code>AccessDenied</Code>
    <Message>Access Denied</Message>
    <RequestId>id</RequestId>
    <HostId>id</HostId>
</Error>

So this could fail because of the missing PutObjectAcl, or could be that the resource you're trying to upload to isn't specified in the "Resource" in your policy. The CLI can't know for sure.

Leaving this open and tagging as documentation so we'll get all the s3 docs updated with the appropriate policies needed.

+1 on PutObjectAcl being the culprit of much pain in my deployment as well


To summarize, this issue happens when you try to set an ACL on an object via the --acl argument:

Given: "Action": [ "s3:PutObject" ], # This works: $ aws s3 cp /tmp/foo s3://bucket/ # This fails: $ aws s3 cp /tmp/foo s3:/bucket/ --acl public-read upload failed: ../../../../../../../tmp/foo to s3://bucket/foo A client error (AccessDenied) occurred when calling the PutObject operation: Access Denied

Given my previous comment, I'd propose updating the documentation for --acl to mention that you need "s3:PutObjectAcl" set if you're setting this param.

Thoughts? cc @kyleknap @mtdowling @rayluo @JordonPhillips

jamesls added a commit to jamesls/aws-cli that referenced this issue (Oct 21, 2015):

Also updated one of the ``s3 cp`` examples using the ``--acl`` option to show an example policy. Closes aws#813.

@jamesls a slightly more discoverable fix would be to say "A client error (AccessDenied) occurred when calling the PutObjectAcl operation", since that would make it clear what's failing and that it's missing from my policy. Otherwise I'll just see the error complaining that it tried to PutObject and bang my head against the wall saying "but I have PutObject in my IAM policy!", without ever noticing that PutObjectAcl isn't there.

Not sure how possible that would be to implement, because the actual command we're invoking is PutObject, so that comes directly from the Python SDK. We don't have a way of knowing that the command failed because of a missing PutObjectAcl in the policy. We could check if you specified the --acl argument, but the error message we get back is a catch-all access denied error that could be caused by a number of issues.

This really cost me some time to debug.

@jamesls I didn't use --acl, but my command still gives the error "access denied when calling the PutObject operation". What could be the reason?

@jamesls I think the error message being generic is fine, but the help to debug is not. There is no mention of ACL or policy problems to guide developers to the right place(s) to check.

@jamesls When I use --exclude "folder/", it is not working with nested folders.
For example, if my file path is c:/source/f1 and my command uses --exclude "f1/", it works perfectly.
But if my path is c:/source/ff/files/temp/f1, then f1 is not excluded. Is there any solution for this?

why does "aws cp" cli tool work without the "s3:PutObjectAcl" ?

I am also getting the same error while trying the cp command:

An error occurred (AccessDenied) when calling the PutObject operation: Access Denied

Note: the failed call to PutObjectAcl never appears in your CloudTrail logs.

PutObjectTagging could also be the culprit

This still happens. In my case, CodeBuild was telling me that PutObject failed, when really it was trying PutObjectAcl. After an hour of amateurishly digging around, I found out my --acl public-read flag was the culprit. I don't think it was even necessary for the static-website S3 bucket, which already had bucket-level public read settings.

currently stabbing my eyes out trying to figure this out! lol

Uploading a file really shouldn't be that complicated, yet here we are.

Never fails to amaze me, AWS.


Experiencing the same issue

An error occurred (AccessDenied) when calling the PutObject operation: Access Denied: ClientError

It works if I disable default KMS encryption.

My error that led to the PutObject error was a wrong ARN. I did not need any permissions other than PutObject.

I used { "Fn::Join": ["/", [ "arn:aws:s3:::", "${file(./config.${self:provider.stage}.json):ticketBucket}/*" ] ] } which should have been { "Fn::Join": ["", [ "arn:aws:s3:::", "${file(./config.${self:provider.stage}.json):ticketBucket}/*" ] ] } (note the / after Fn::Join).

I encountered a similar issue where even including "s3:PutObjectAcl" did not help. It occurred while using an IAM user belonging to a different AWS account than the S3 bucket granting access via its bucket policy. Changing the bucket policy to use a Principal role with identical permissions, but belonging to the same AWS account as the bucket, solved it in this case.

Given this S3 bucket policy on AWS Account1:

"Principal": {
    "AWS": [
        "arn:aws:iam::###########:user/my-user"  # IAM user from AWS Account2
    ]
},
"Action": [
    "s3:PutObject",
    "s3:PutObjectAcl"
],

# This works:
$ aws s3 cp /tmp/foo s3://bucket/

# This fails:
$ aws s3 cp /tmp/foo s3://bucket/ --acl public-read
upload failed: ../../../../../../../tmp/foo to s3://bucket/foo A client error (AccessDenied) occurred when calling the PutObject operation: Access Denied

Solution: Use an IAM user belonging to the same AWS Account as the S3 Bucket in question.