
Einstein Vision – Real Estate App

Suppose that on a real estate website we search for properties and get results related to the search. On the Salesforce platform we have Einstein, which can help with large amounts of data; Einstein Vision provides image classification and object identification.

In the same way, a real estate app was built with Einstein Vision, around this scenario: a user searches for a property on Property.com by choosing the type of house, and as soon as the input is given, the related images are displayed. There are thousands of images to process and display, which is difficult to do manually but can be achieved with predictive algorithms. In this scenario Einstein plays a vital role, acting like a human in classifying the images. For more background, you can go through the Trailhead project that provides a managed package useful in this demo (https://trailhead.salesforce.com/en/projects/build-a-cat-rescue-app-that-recognizes-cat-breeds).

Steps to Follow:

  1. AWS
  2. Train Dataset
  3. S3 Link

1. AWS storage is used for two reasons. First, storing a huge number of images inside Salesforce is difficult, whereas AWS puts no practical limit on the amount of data that can be stored. Second, Einstein is trained from a downloadable zip file exposed as a URL. In AWS we create a bucket in S3 and store the files inside it. Here I have created a zip file and a common folder: the zip file holds the training dataset (which should be more than 12 MB), and the common folder holds the individual images that are fetched and displayed to the user later. The more images you add, the more accurate Einstein's predictions become. One important point when creating the files in AWS is to make every single file publicly accessible.

[Image: S3 bucket in the AWS console]

Now the link used to train the dataset is ready (https://s3.amazonaws.com/sfdc-einstein-demo/newmodifiedhouses2.zip). The zip file contains sub-folders, as shown below:

[Image: sub-folders inside the zip file]

2. For Einstein, each sub-folder name is a label, and the images inside a sub-folder are the examples (the dataset) for that label. The dataset is created and trained by passing the zip link; after training, a model is produced covering all of the dataset's labels. A hypothetical layout is sketched below.
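For illustration only, the zip might be organised like this (the label names here are hypothetical; the actual sub-folder names are the ones shown in the screenshot above):

newmodifiedhouses2.zip
  bungalow/        <- label
    house1.jpg
    house2.jpg
  duplex/          <- label
    house3.jpg
    house4.jpg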

[Image: Einstein Vision dataset with its labels]

On clicking the Create Dataset button, we pass the URL to an Apex class. The file is downloaded to MetaMind, where Einstein processes and analyses it. To learn more about MetaMind, refer to the given link (https://metamind.readme.io/docs/introduction-to-the-einstein-predictive-vision-service).

//method1 in awsFileTest.apex
// Creates an Einstein Vision dataset asynchronously from the zip URL.
@AuraEnabled
public static void createDatasetFromUrl(String zipUrl) {
    EinsteinVision_PredictionService service = new EinsteinVision_PredictionService();
    service.createDatasetFromUrlAsync(zipUrl);
    System.debug(service);
}
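
For a quick check, the same method can be invoked from Execute Anonymous; the class name awsFileTest comes from the file names above, and the URL is the training zip from the S3 bucket:

// Execute Anonymous: kick off dataset creation from the S3 zip link
awsFileTest.createDatasetFromUrl('https://s3.amazonaws.com/sfdc-einstein-demo/newmodifiedhouses2.zip');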
 

On refreshing the dataset list, we get the labels and the number of files that were supplied to train Einstein.

//method2 in awsFileTest.apex
// Returns all datasets so the component can show their labels and file counts.
@AuraEnabled
public static List<EinsteinVision_Dataset> getDatasets() {
    EinsteinVision_PredictionService service = new EinsteinVision_PredictionService();
    EinsteinVision_Dataset[] datasets = service.getDatasets();
    return datasets;
}
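
As a quick sanity check, the list can also be dumped from Execute Anonymous:

// Execute Anonymous: inspect the datasets known to Einstein
System.debug(awsFileTest.getDatasets());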

Einstein makes its predictions using the models produced by training a dataset. We can also delete a trained dataset and add a new one.

//method3 in awsFileTest.apex
// Trains the dataset and returns the id of the resulting model.
@AuraEnabled
public static String trainDataset(Decimal datasetId) {
    EinsteinVision_PredictionService service = new EinsteinVision_PredictionService();
    EinsteinVision_Model model = service.trainDataset(Long.valueOf(String.valueOf(datasetId)), 'Training', 0, 0, '');
    return model.modelId;
}

//method4 in awsFileTest.apex
// Deletes a dataset that is no longer needed.
@AuraEnabled
public static void deleteDataset(Long datasetId) {
    EinsteinVision_PredictionService service = new EinsteinVision_PredictionService();
    service.deleteDataset(datasetId);
}
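
Training can also be kicked off from Execute Anonymous; the dataset id below is hypothetical, a real one comes from getDatasets:

// Execute Anonymous: train a dataset (hypothetical id) and capture the model id
String modelId = awsFileTest.trainDataset(1234567);
System.debug('model id: ' + modelId);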

On training the dataset, a dataset model with an id is generated.

// Returns the models that have been trained for the given dataset.
public static List<EinsteinVision_Model> getModels(Long datasetId) {
    EinsteinVision_PredictionService service = new EinsteinVision_PredictionService();
    EinsteinVision_Model[] models = service.getModels(datasetId);
    return models;
}

3. We use the S3 Link app from AppExchange to iterate over the file names stored in AWS. S3 Link is basically a bridge between Salesforce and AWS: it lets us import and export files, where importing a file brings in only the file's details along with a link to view or download the image. In a callout to AWS we can only hard-code the destination file name, and with so many files it is not possible to hard-code them all, so the S3 Link file records give us the names to iterate over. To install the app, follow the guidelines in the given link (https://appexchange.salesforce.com/appxListingDetail?listingId=a0N3000000CW1OXEA1).
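Once the app is installed and the files are imported, the file names are available on the NEILON__File__c records, which can be verified from Execute Anonymous:

// Execute Anonymous: confirm the S3 Link file records are visible in the org
System.debug([SELECT Name FROM NEILON__File__c LIMIT 5]);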

[Image: S3 Link file records in Salesforce]

Here I make a callout to AWS for each image name and receive the image as a Blob, because Einstein needs the actual (original) image in order to return the probability of each possible type of house. Note that the S3 domain must be added to Remote Site Settings for the callout to succeed.

@AuraEnabled
public static List<awsFileTestWrapper.awswrapper> getImageAsBlob() {

    // File records created by the S3 Link app; Name holds the file name in the bucket.
    List<NEILON__File__c> fList = [SELECT Name FROM NEILON__File__c];
    System.debug('flist ' + fList);
    Map<Blob, String> bList = new Map<Blob, String>();
    for (NEILON__File__c nm : fList) {
        Http h = new Http();
        HttpRequest req = new HttpRequest();
        String firstImageURL = 'https://s3.amazonaws.com/sfdc-einstein-demo/commonhouses/' + nm.Name;
        System.debug('firstImageURL' + firstImageURL);
        // Replace any spaces with %20
        firstImageURL = firstImageURL.replace(' ', '%20');
        req.setEndpoint(firstImageURL);
        req.setMethod('GET');
        // If you want to get a PDF file, the content type would be 'application/pdf'
        req.setHeader('Content-Type', 'image/jpg');
        req.setCompressed(true);
        req.setTimeout(60000);

        HttpResponse res = h.send(req);
        // The response status is useful for dealing with error situations
        String responseValue = res.getStatus();
        System.debug('Response Body for File: ' + responseValue);
        // getBodyAsBlob gives us the blob of the file (available since API version 24)
        Blob image = res.getBodyAsBlob();
        System.debug('blob' + image);
        bList.put(image, nm.Name);
    }
    System.debug('blob list' + bList);

    EinsteinVision_PredictionService service = new EinsteinVision_PredictionService();
    EinsteinVision_Dataset[] datasets = service.getDatasets();
    List<awsFileTestWrapper.awswrapper> listaws = new List<awsFileTestWrapper.awswrapper>();

    for (EinsteinVision_Dataset dataset : datasets) {

        EinsteinVision_Model[] models = service.getModels(dataset);
        EinsteinVision_Model model = models.get(0);
        Set<Blob> bList2 = bList.keySet();
        for (Blob fileBlob : bList2) {
            System.debug('blob in loop ' + fileBlob);
            // Ask Einstein to classify the image blob against the trained model
            EinsteinVision_PredictionResult result = service.predictBlob(model.modelId, fileBlob, '');
            EinsteinVision_Probability probability = result.probabilities.get(0);
            System.debug('1.' + result.probabilities.get(0).label + '----' + result.probabilities.get(0).probability
                + ' 2.' + result.probabilities.get(1).label + '----' + result.probabilities.get(1).probability
                + ' 3.' + result.probabilities.get(2).label + '----' + result.probabilities.get(2).probability
                + ' 4.' + result.probabilities.get(3).label + '----' + result.probabilities.get(3).probability
                + ' 5.' + result.probabilities.get(4).label + '----' + result.probabilities.get(4).probability);
            // Wrap the file name, top label and probability for the Lightning component
            awsFileTestWrapper.awswrapper aws = new awsFileTestWrapper.awswrapper();
            aws.filename = bList.get(fileBlob);
            aws.mylabel = result.probabilities.get(0).label;
            aws.prob = result.probabilities.get(0).probability;
            listaws.add(aws);
        }
    }
    System.debug('values are' + listaws[0].filename);
    return listaws;
}

Einstein returns labels and probabilities for each image; result.probabilities.get(0).probability is the highest probability, i.e. the closest match for that particular image. I pass the file name, label, and probability to the Lightning component controller, hence a list of wrappers is used.

//awsFileTestWrapper.apex
public class awsFileTestWrapper {
    // Carries one image's file name, predicted label and probability to the component
    public class awswrapper {
        @AuraEnabled public String mylabel;
        @AuraEnabled public String filename;
        @AuraEnabled public Double prob;
    }
}

In the component's controller, the callout is made while iterating over the file names, and the images fetched from AWS are displayed to the user.

[Screenshot: Lightning component markup (aura cmp.PNG)]
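
Since the markup itself only appears in the screenshot, here is a minimal sketch of what the component might look like. The aura:id "select" and the attributes contents, probability, and IsSpinner are taken from the JavaScript controller below; the component name, option values, and layout are assumptions, not the author's exact code.

<!-- awsFileCmp.cmp (hypothetical name) - minimal sketch, not the exact markup from the screenshot -->
<aura:component controller="awsFileTest">
    <aura:attribute name="contents" type="List"/>      <!-- file names returned from Apex -->
    <aura:attribute name="probability" type="List"/>   <!-- matching probabilities -->
    <aura:attribute name="IsSpinner" type="Boolean" default="false"/>

    <!-- house type chosen by the user; the option values are hypothetical labels -->
    <lightning:select aura:id="select" label="Type of house">
        <option value="bungalow">bungalow</option>
        <option value="duplex">duplex</option>
    </lightning:select>
    <lightning:button label="Search" onclick="{!c.extractfile}"/>

    <aura:if isTrue="{!v.IsSpinner}">
        <lightning:spinner alternativeText="Loading"/>
    </aura:if>

    <!-- build the public S3 URL from the common folder and each file name -->
    <aura:iteration items="{!v.contents}" var="content">
        <img src="{!'https://s3.amazonaws.com/sfdc-einstein-demo/commonhouses/' + content}"/>
    </aura:iteration>
</aura:component>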

The values from the Apex controller are returned to the JavaScript controller.

//controller.js
({
    extractfile: function(component, event, helper) {
        alert('button clicked');
        // House type selected by the user
        var val = component.find("select").get("v.value");
        alert('value' + val);
        var names = [];
        var probs = [];
        component.set("v.IsSpinner", true);
        var action1 = component.get("c.getImageAsBlob");
        action1.setCallback(this, function(response) {
            var ret = response.getReturnValue();
            var name = '';
            var prob = '';
            // Keep only the images whose predicted label matches the selected type
            for (var i = 0; i < ret.length; i++) {
                if (ret[i].mylabel == val) {
                    name = ret[i].filename;
                    names.push(name);
                    prob = ret[i].prob;
                    probs.push(prob);
                }
            }
            component.set("v.IsSpinner", false);
            component.set("v.contents", names);
            component.set("v.probability", probs);
        });
        $A.enqueueAction(action1);
    },
})

The final output shows the images and their probabilities to the user, as shown below.

[Image: final output with matching houses and their probabilities]

Feel free to contact us with any doubts, or if you need the code shown in the screenshots.

References:

  1. https://developer.salesforce.com/blogs/developer-relations/2017/05/image-based-search-einstein-vision-lightning-components.html
  2. https://andyinthecloud.com/2017/02/05/image-recognition-with-the-salesforce-einstein-api-and-an-amazon-echo/
  3. https://metamind.readme.io/docs/prediction-with-image-file

 

 
