Suppose a grocery store wants to update the daily stock of the refrigerated juice cartons of different brands. Counting every carton of every brand by hand each day is not practical. This is where Einstein Object Detection helps the grocery owner: it identifies each brand and its bounding box in a photo, so the cartons can be categorized, counted per brand, and summarized in a report for the owner.
Before Einstein can identify the objects, the model has to be trained on a dataset. To build the dataset, collect images containing different combinations of the brands. Here I have taken four brands of juice cartons: Real, Tropicana, Natural, and Nescafe. The dataset images should cover many permutations of these cartons.
Save all of these images into one folder and create a .csv file inside the same folder. For each image, the .csv should record the height, width, x coordinate, and y coordinate of every carton, using entries of the form: Box1 {"height":494,"y":410,"label":"Tropicana","width":284,"x":11}. The image name, for example juice1.jpg, goes in the first column of the sheet, followed by the corresponding bounding boxes in the format above. Refer to the images below while creating the .csv file.
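For reference, one row of the annotation file could look like the sample below. The header names and the second box's numbers are placeholders of my own, not values from the original dataset; note that the quotes inside each box's JSON are doubled so the value survives CSV quoting.

image_name,box1,box2
juice1.jpg,"{""height"":494,""y"":410,""label"":""Tropicana"",""width"":284,""x"":11}","{""height"":316,""y"":152,""label"":""Real"",""width"":240,""x"":305}"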
Once the folder contains all the images and the .csv file, zip it and upload the zip to AWS to get a downloadable link for training the model. To learn how to upload to AWS and obtain the link, refer to https://blogs.absyz.com/2018/02/13/einstein-vision-real-estate-app/.
Using that link, we train Einstein on the dataset.
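As a rough sketch (not from the original post), dataset creation and training can also be driven from Apex with the same wrapper used later in this article. It assumes your copy of the Einstein Vision Apex wrapper exposes createDatasetFromUrlAsync and trainDataset helpers, and the S3 URL is a placeholder; adjust both to match your package and bucket.

    // Hedged sketch: create the dataset from the zip link, then train once it is ready.
    EinsteinVision_PredictionService service = new EinsteinVision_PredictionService();

    // Einstein downloads and unpacks the zip asynchronously.
    service.createDatasetFromUrlAsync('https://your-bucket.s3.amazonaws.com/juice.zip');

    // Later, once the dataset shows up under the name 'juice', start a training run.
    for (EinsteinVision_Dataset dataset : service.getDatasets()) {
        if (dataset.Name.equals('juice')) {
            // Passing 0 for epochs and learning rate is meant to leave them at the service defaults; tune if needed.
            service.trainDataset(dataset.Id, 'Juice Carton Detector', 0, 0, null);
        }
    }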
In the Einstein package, modify some lines of code in the Einstein Prediction Service class.
    private static String BASE_URL = 'https://api.einstein.ai/v2';
    private String PREDICT = BASE_URL + '/vision/detect';

    // Throughout the code, the dataset type passed to the HTTP body parts should be 'image-detection'.
    EinsteinVision_HttpBodyPartDatasetUrl parts = new EinsteinVision_HttpBodyPartDatasetUrl(url, 'image-detection');
To get the predicted bounding boxes in the output, add the code below to the probability class.
    @AuraEnabled
    public BoundingBox boundingBox {get; set;}

    public class BoundingBox {
        @AuraEnabled
        public Integer minX {get; set;}
        @AuraEnabled
        public Integer minY {get; set;}
        @AuraEnabled
        public Integer maxX {get; set;}
        @AuraEnabled
        public Integer maxY {get; set;}
    }
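With the BoundingBox wrapper in place, every entry in result.probabilities carries its box corners alongside the label and confidence score. As an illustrative sketch of my own (assuming service, model, and fileBlob are set up the same way as in the prediction method below), the detections for one image can be walked like this:

    // Run detection and log each detected carton with its label, score, and box corners.
    EinsteinVision_PredictionResult result = service.predictBlob(model.modelId, fileBlob, '');
    for (EinsteinVision_Probability p : result.probabilities) {
        System.debug(p.label + ' (' + p.probability + '): ('
            + p.boundingBox.minX + ', ' + p.boundingBox.minY + ') to ('
            + p.boundingBox.maxX + ', ' + p.boundingBox.maxY + ')');
    }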
In this Apex class, a wrapper is created to hold the image and the record, because the input image has to be displayed back to the user. The method below returns the predicted values.
    @AuraEnabled
    public static objects__c getPrediction(Id objectId, String fileName, String base64) {
        // Wrapper that pairs the record with the uploaded image so it can be shown back to the user.
        wrapperClass returnwrapperClass = new wrapperClass();
        objects__c obj = new objects__c();
        Blob fileBlob = EncodingUtil.base64Decode(base64);
        EinsteinVision_PredictionService service = new EinsteinVision_PredictionService();
        EinsteinVision_Dataset[] datasets = service.getDatasets();
        List<ContentDocument> documents = new List<ContentDocument>();

        for (EinsteinVision_Dataset dataset : datasets) {
            if (dataset.Name.equals('juice')) {
                // Run the detection against the latest model trained on the 'juice' dataset.
                EinsteinVision_Model[] models = service.getModels(dataset);
                EinsteinVision_Model model = models.get(0);
                EinsteinVision_PredictionResult result = service.predictBlob(model.modelId, fileBlob, '');

                // Count how many cartons of each brand were detected.
                String resultedProbability = '';
                Map<String, Integer> items = new Map<String, Integer>();
                for (Integer i = 0; i < result.probabilities.size(); i++) {
                    String label = result.probabilities.get(i).label;
                    if (!items.containsKey(label)) {
                        items.put(label, 1);
                    } else {
                        Integer count = items.get(label);
                        items.put(label, count + 1);
                    }
                }

                // Build a readable summary, e.g. ' Tropicana --  3 Real --  2'.
                for (String i : items.keySet()) {
                    resultedProbability = resultedProbability + ' ' + i + ' -- ' + ' ' + items.get(i);
                }

                // Store the summary on the record.
                obj = [SELECT Id, Results__c FROM objects__c WHERE Id = :objectId];
                obj.Results__c = resultedProbability;
                update obj;
                returnwrapperClass.objectRecord = obj;

                // Save the uploaded photo as a file and link it to the record.
                ContentVersion contentVersion = new ContentVersion(
                    Title = fileName,
                    PathOnClient = fileName + '.jpg',
                    VersionData = fileBlob,
                    IsMajorVersion = true
                );
                insert contentVersion;
                documents = [SELECT Id, Title, LatestPublishedVersionId, CreatedDate
                             FROM ContentDocument ORDER BY CreatedDate DESC];

                // Create the ContentDocumentLink record for the newest document (the one just inserted).
                ContentDocumentLink cdl = new ContentDocumentLink();
                cdl.LinkedEntityId = objectId;
                cdl.ContentDocumentId = documents[0].Id;
                cdl.ShareType = 'V';
                insert cdl;
            }
        }
        return obj;
    }
Here I have created a Lightning component that is accessed from a mobile device through the Salesforce1 app. First, we take a photo on the phone, since an image has to be sent to Einstein. Next, the image is uploaded to Einstein. Finally, we get the output, which can be displayed and further processed into monthly reports on stock and sales data.
    <aura:component implements="force:appHostable,flexipage:availableForAllPageTypes,force:hasRecordId" access="global" controller="EinsteinVision_Admin">
        <aura:attribute name="contents" type="object" />
        <aura:attribute name="Objectdetection" type="objects__c" />
        <aura:attribute name="files" type="Object[]"/>
        <aura:attribute name="image" type="String" />
        <aura:attribute name="recordId" type="Id" />
        <aura:attribute name="newPicShow" type="boolean" default="false" />
        <aura:attribute name="wrapperList" type="object"/>

        <lightning:card iconName="standard:event" title="Object Detection">
            <aura:set attribute="actions">
                <lightning:button class="slds-float_left" variant="brand" label="Upload File" onclick="{! c.handleClick }" />
            </aura:set>
        </lightning:card>

        <aura:if isTrue="{!v.newPicShow}">
            <div style="font-size:20px;">
                <h1>Result1 : {!v.Objectdetection.Results__c}</h1>
            </div>
            <div class="slds-float_left" style="height:500px;width:400px">
                <img src="{!v.image}"/>
            </div>
        </aura:if>

        <div>
            <div aura:id="changeIt" class="change">
                <div class="slds-m-around--xx-large">
                    <div role="dialog" tabindex="-1" aria-labelledby="header99" class="slds-modal slds-fade-in-open">
                        <div class="slds-modal__container">
                            <div class="slds-modal__header">Upload Files
                                <lightning:buttonIcon class="slds-button slds-modal__close slds-button--icon-inverse" iconName="utility:close" variant="bare" onclick="{!c.closeModal}" alternativeText="Close window." size="medium"/>
                            </div>
                            <div class="slds-modal__content slds-p-around--medium">
                                <div class="slds-box">
                                    <div class="slds-grid slds-wrap">
                                        <lightning:input aura:id="fileInput" type="file" name="file" multiple="false" accept="image/*;capture=camera" files="{!v.files}" onchange="{! c.onReadImage }" label="Upload an image:"/>
                                    </div>
                                </div>
                            </div>
                            <div class="slds-modal__footer">
                            </div>
                        </div>
                    </div>
                </div>
            </div>
        </div>
    </aura:component>
In controller.js, we handle the user input and pass it to Apex, which returns the detected labels along with their bounding boxes.
    ({
        onUploadImage: function(component, file, base64Data) {
            // Send the photo to Apex for prediction and show the result once it comes back.
            var action = component.get("c.getPrediction");
            var objectId = component.get("v.recordId");
            action.setParams({
                objectId: objectId,
                fileName: file.name,
                base64: base64Data
            });
            action.setCallback(this, function(a) {
                var state = a.getState();
                if (state === 'ERROR') {
                    console.log(a.getError());
                } else {
                    component.set("v.Objectdetection", a.getReturnValue());
                    var cmpTarget1 = component.find('changeIt');
                    $A.util.addClass(cmpTarget1, 'change');
                    component.set("v.newPicShow", true);
                }
            });
            $A.enqueueAction(action);
        },

        onGetImageUrl: function(component, file, base64Data) {
            // Fetch the stored image so it can be displayed back to the user.
            var action = component.get("c.getImageUrlFromAttachment");
            var objId = component.get("v.recordId");
            action.setParams({
                objId: objId
            });
            action.setCallback(this, function(a) {
                var state = a.getState();
                if (state === 'ERROR') {
                    console.log(a.getError());
                } else {
                    if (a.getReturnValue() != '') {
                        component.set("v.image", "/servlet/servlet.FileDownload?file=" + a.getReturnValue());
                    }
                }
            });
            $A.enqueueAction(action);
        }
    })
helper.js
    ({
        onUploadImage: function(component, file, base64Data) {
            // Same server call as above, with a user-facing alert when the prediction fails.
            var action = component.get("c.getPrediction");
            var objectId = component.get("v.recordId");
            action.setParams({
                objectId: objectId,
                fileName: file.name,
                base64: base64Data
            });
            action.setCallback(this, function(a) {
                var state = a.getState();
                if (state === 'ERROR') {
                    console.log(a.getError());
                    alert("An error has occurred");
                } else {
                    component.set("v.Objectdetection", a.getReturnValue());
                    var cmpTarget1 = component.find('changeIt');
                    $A.util.addClass(cmpTarget1, 'change');
                    component.set("v.newPicShow", true);
                }
            });
            $A.enqueueAction(action);
        }
    })
The final output is tested from the Salesforce1 mobile app.
In case of any doubts feel free to reach out to us.