When we need to upload multiple files or attach many files to a record, Salesforce's storage limit per user license purchased, which varies from edition to edition, can become a constraint. So organisations sometimes decide to use an external storage service like Amazon S3.
Users can be given the option to upload files to Amazon S3 via Salesforce and then access them through the upload URLs. The REST protocol is used in this scenario.
Files are uploaded securely from Salesforce to the Amazon server. After you create your AWS (Amazon Web Services) account, Amazon shares an access key ID and a secret access key with you. These credentials are used to authenticate to S3 from Salesforce.
After logging in to AWS, go to the console screen and click on S3 under the Storage & Content Delivery section.
Create a bucket where the files will be uploaded.
You cannot create real folders inside a bucket, but a logical folder can be created by using a '/' slash in the object key.
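For example, an object uploaded with a key like the one below shows up in the S3 console under an 'invoices/2016' folder path. The names here are just placeholders, not part of the original code:

// The '/' characters in the object key act as logical folder separators.
String folderName = 'invoices/2016'; // hypothetical prefix
String filename = folderName + '/' + 'quote.pdf'; // key: invoices/2016/quote.pdf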
Let us see everything in action:
public void uploadToAmazonS3(Attachment attach, String folderName) {
    String filename = folderName + '/' + attach.Name;
    String formattedDateString = DateTime.now().formatGMT('EEE, dd MMM yyyy HH:mm:ss z');
    String bucketname = ''; // the bucket where files should be uploaded
    String host = ''; // the AWS server base URL

    HttpRequest req = new HttpRequest();
    req.setMethod('PUT');
    req.setEndpoint('https://' + bucketname + '.' + host + '/' + filename);
    req.setHeader('Host', bucketname + '.' + host);
    req.setHeader('Content-Type', attach.ContentType);
    req.setHeader('Connection', 'keep-alive');
    req.setHeader('Date', formattedDateString);
    // The canned ACL is passed in the x-amz-acl header; it must also be signed (see below).
    req.setHeader('x-amz-acl', 'public-read-write');
    req.setBodyAsBlob(attach.Body);
    // Content-Length must be the actual byte count of the body being sent.
    req.setHeader('Content-Length', String.valueOf(attach.Body.size()));
    // ... signing and sending continue below.
Create the REST request and set the headers as shown above.
The host can be a region-specific endpoint such as 's3-ap-southeast-1.amazonaws.com' or the generic 's3.amazonaws.com'.
The request needs to be equipped with proper authentication so that it securely reaches the correct endpoint. To achieve this, the Amazon-provided access key ID and secret are used to build an authorization string containing an HMAC-SHA1 signature.
    String key = 'XXXXXXXXXXXXXXXXXXXX'; // your AWS access key ID
    String secret = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXX'; // your AWS secret access key
    // The string to sign covers the verb, Content-Type, Date,
    // any x-amz-* headers, and the canonicalized resource.
    String stringToSign = 'PUT\n\n' + attach.ContentType + '\n' +
        formattedDateString + '\n' +
        'x-amz-acl:public-read-write\n' +
        '/' + bucketname + '/' + filename;
    Blob mac = Crypto.generateMac('HMACSHA1', Blob.valueOf(stringToSign), Blob.valueOf(secret));
    String signed = EncodingUtil.base64Encode(mac);
    String authHeader = 'AWS' + ' ' + key + ':' + signed;
The above authorization string needs to be passed as a header on the HTTP request, and then the request can be sent.
    req.setHeader('Authorization', authHeader);
    Http http = new Http();
    HttpResponse resp = http.send(req);
}
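As a usage sketch, the assembled method could be invoked with an Attachment queried from the org. The query and the 'invoices' folder name here are placeholders, not part of the original post:

// Hypothetical call: fetch one attachment and push it to S3.
Attachment attach = [SELECT Name, Body, ContentType FROM Attachment LIMIT 1];
uploadToAmazonS3(attach, 'invoices');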
A response status code of 200 means a successful upload.
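If the upload fails instead, S3 returns an XML error document whose Code element identifies the cause. A minimal check could look like this:

if (resp.getStatusCode() == 200) {
    System.debug('Uploaded to ' + req.getEndpoint());
} else {
    // The response body is an XML error document; its <Code> element
    // (e.g. AccessDenied, SignatureDoesNotMatch) explains the failure.
    System.debug('Upload failed: ' + resp.getStatus() + ' / ' + resp.getBody());
}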
Now the bucket needs to be configured as a website. The objects (the uploaded files) should be made publicly readable, so that the same URL used to upload a file can also be used to access it publicly. To do so, you need to write a bucket policy that grants everyone the "s3:GetObject" permission.
You can go to http://awspolicygen.s3.amazonaws.com/policygen.html
and create a policy. Follow the steps below to create the policy.
Principal: *
Set the Amazon Resource Name (ARN) to arn:aws:s3:::<bucket_name>/<key_name>
Replace <bucket_name> with your bucket's name and set <key_name> to *.
Click on Add Statement and then Generate Policy. Copy the generated JSON script.
To show how the policy looks, here is a sample bucket policy script I have created:
{ "Version": "2012-10-17", "Id": "Policy1463490894535", "Statement": [ { "Sid": "Stmt1463490882740", "Effect": "Allow", "Principal": "*", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::bucket_name/*" } ] }
Then open the bucket you created and go to Properties. Click on Add Bucket Policy, paste the generated script into the popup, and save. This will make the files uploaded to the bucket publicly accessible.
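Once the policy is saved, the same virtual-hosted URL used for the upload serves as the public download link. Reusing the variables from the upload method above, a sketch:

// Publicly readable once the s3:GetObject policy is in place.
String publicUrl = 'https://' + bucketname + '.' + host + '/' + filename;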
Hi,
I tried exactly the same way, but I am getting a "400 Error – Bad Request". What could be the issue?
Regards,
Aditya.
Hi Aditya,
First, I would check the error code in the response. It can be something like 'CredentialsNotSupported', 'ExpiredToken', 'IncompleteBody', etc. Then, based on that, modify your request body or parameters as needed.
For example, if you don't send the number of bytes specified by the Content-Length HTTP header, you will get a 400 Bad Request with the IncompleteBody code.
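As a sketch, deriving the header from the actual Blob being sent avoids that mismatch (blobBody here stands for whatever body you upload):

Blob blobBody = attach.Body; // the exact bytes being uploaded
req.setBodyAsBlob(blobBody);
// Content-Length must equal the number of bytes in the body.
req.setHeader('Content-Length', String.valueOf(blobBody.size()));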
Thanks
Hi,
I got this type of error:
RESPONSE STRING: System.HttpResponse[Status=Forbidden, StatusCode=403]
AccessDenied: Access Denied (RequestId: A6F24AA2A978CDF8, HostId: K7F4xK71TB6xo0/tTcdDEXRMpEMbaM0od0BbfVO7bPAsPRKeZOWVbm/2QQLfOHH5Y5bi0KoRUJk=)
What do I do?
Hi Rishikush,
I am also facing a similar response.
Did you get a solution to this issue?
Hi,
I am trying to upload a JSON file to an Amazon S3 bucket, but I am getting an exception:
Callout Exception: Unexpected end of file from server
My Code:
public class ProductAmazon_RestClass {
    public void ProductAmazon_RestMethod(String folderName) {
        String binaryString = ProductAmazonIntegration.ProductAmazonIntegration();
        String key = '***********************';
        String secret = '*********************************************************************';
        String formattedDateString = Datetime.now().formatGMT('EEE, dd MMM yyyy HH:mm:ss z');
        String bucketname = '';
        String host = 's3-website-us-east-1.amazonaws.com';
        String method = 'PUT';
        String filename = 'Product/Product.json';
        // Request starts
        HttpRequest req = new HttpRequest();
        req.setMethod(method);
        req.setEndpoint('https://' + bucketname + '.' + host + '/' + bucketname + '/' + filename);
        req.setHeader('Host', bucketname + '.' + host);
        req.setTimeout(120000);
        req.setHeader('Content-Length', String.valueOf(binaryString.length()));
        req.setHeader('Content-Encoding', 'UTF-8');
        req.setHeader('Content-type', 'application/json');
        req.setHeader('Connection', 'keep-alive');
        req.setHeader('Date', formattedDateString);
        req.setHeader('ACL', 'public-read');
        req.setBody(binaryString);
        String stringToSign = 'PUT\n\n\n' + formattedDateString + '\n\n/' + bucketname + '/' + filename;
        String signed = createSignature(stringToSign, secret);
        String authHeader = 'AWS' + ' ' + key + ':' + signed;
        req.setHeader('Authorization', authHeader);
        Http http = new Http();
        try {
            // Execute web service call
            HTTPResponse res = http.send(req);
            System.debug('RESPONSE STRING: ' + res.toString());
            System.debug('RESPONSE STATUS: ' + res.getStatus());
            System.debug('STATUS_CODE: ' + res.getStatusCode());
        } catch (System.CalloutException e) {
            System.debug('AWS Service Callout Exception: ' + e.getMessage());
        }
    }

    public String createSignature(String canonicalBuffer, String secret) {
        String sig;
        Blob mac = Crypto.generateMac('HMACSHA1', Blob.valueOf(canonicalBuffer), Blob.valueOf(secret));
        sig = EncodingUtil.base64Encode(mac);
        return sig;
    }
}
Hi,
Apart from integrating Amazon AWS S3 with Salesforce, we have a requirement to gzip files before placing them in the S3 bucket. Is there any way to gzip TXT files? If yes, then the only thing I need is to integrate S3 with Salesforce through RESTful services.
Thanks in advance,
Shubham
Yes, you can do that by calling setCompressed(true) on the HttpRequest. It is also recommended to set the Content-Encoding header to gzip in this case.
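A minimal sketch of that suggestion (the endpoint and payload are placeholders):

HttpRequest req = new HttpRequest();
req.setMethod('PUT');
req.setEndpoint('https://bucketname.s3.amazonaws.com/logs/app.txt'); // placeholder
// setCompressed(true) makes Apex gzip the request body before sending.
req.setCompressed(true);
req.setHeader('Content-Encoding', 'gzip');
req.setBody('raw text payload to be compressed');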
Hi Subrat, I am working on displaying Amazon S3 objects in Salesforce. For instance, a folder is created in S3 corresponding to a Salesforce user, and the user is expected to see all the files under his folder in Salesforce. Any idea on how to implement this in Apex?
Hi, I am getting an "HTTP version not supported" error while hitting the web service.
Thanks for the solution. But I am only able to upload files up to 12 MB through this code, since Apex HttpRequest has a size limit for the request body. How can I upload files larger than 12 MB? Any solution? Thanks in advance.