Time-Travel with Spark

Time travel operations in Hopsworks Feature Store

In this notebook we will introduce time travel operations in the Hopsworks Feature Store (HSFS). Currently, HSFS supports Apache Hudi (http://hudi.apache.org/), a storage abstraction/library for incremental data ingestion into the Hopsworks Feature Store.

Background

Motivation

Traditional ETL typically involves taking a snapshot of a production database and doing a full load into a data lake (typically stored on a distributed file system). The snapshot approach to ETL is simple, since the snapshot is immutable and can be loaded as an atomic unit into the data lake. The downside is that it is slow: even if just a single record has been updated since the last ingestion, the entire table has to be re-written. If you are working with Big Data (TB or PB size datasets), this introduces significant data latency and wastes resources, since the majority of the writes when ingesting the snapshot are redundant; most records have not been updated since the last ETL step.

This motivates the use-case for incremental data ingestion, where only the deltas/changelogs since the last ingestion are inserted. With incremental processing, you process data in mini-batches and run the Spark job frequently. The incremental model makes better use of resources and makes it easier to do complex processing and joins.

In addition, data is rarely immutable in practice. A bank transaction might be reverted, a customer might change his or her home address, and a customer review might be updated, to give a few examples. This is where Hudi comes into the picture. Hudi stands for Hadoop Upserts anD Incrementals and brings two new primitives for data engineering on distributed file systems (in addition to append/read):

  • Upsert: the ability to do insertions (appends) and updates efficiently.
  • Incremental reads: the ability to read datasets incrementally using the notion of “commits”. Both primitives are sketched below.
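
To make these primitives concrete, here is a minimal sketch of what they look like with the raw Hudi Spark datasource, outside of HSFS. The table path, toy schema and option values are illustrative assumptions (exact option keys can vary between Hudi versions); the rest of this notebook uses the HSFS API, which wraps these details for you.

import org.apache.spark.sql.SaveMode
import spark.implicits._

// Hypothetical standalone-Hudi sketch; basePath and the toy schema are illustrative.
val basePath = "hdfs:///tmp/hudi/upsert_demo"
val df = Seq((1, 100.0f, 2020), (2, 200.0f, 2020)).toDF("id", "salary", "year")

// Upsert: rows whose record key ("id") already exists are updated in place,
// new keys are inserted; every write becomes a commit on the table's timeline.
df.write.format("hudi")
  .option("hoodie.table.name", "upsert_demo")
  .option("hoodie.datasource.write.recordkey.field", "id")
  .option("hoodie.datasource.write.precombine.field", "year")
  .option("hoodie.datasource.write.operation", "upsert")
  .mode(SaveMode.Append)
  .save(basePath)

// Incremental read: fetch only the records committed after a given instant
// ("0" means from the beginning of the timeline).
val changes = spark.read.format("hudi")
  .option("hoodie.datasource.query.type", "incremental")
  .option("hoodie.datasource.read.begin.instanttime", "0")
  .load(basePath)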

How Hopsworks Feature Store time travel operations can be used for ML and Feature Pipelines

Hudi is integrated into the Hopsworks Feature Store for incremental feature computation, point-in-time correctness, and backfilling of feature data.

Incremental Feature Engineering

spark
Starting Spark application
ID: 18, YARN Application ID: application_1609813430371_0019, Kind: spark, State: idle
SparkSession available as 'spark'.
res1: org.apache.spark.sql.SparkSession = org.apache.spark.sql.SparkSession@4ee147e4

Examples

Create HUDI time-travel-enabled feature groups and bulk insert a sample dataset

For this demo we will use a small sample of the widely used Agrawal Generator dataset, which contains hypothetical data on people applying for a loan. Rakesh Agrawal, Tomasz Imielinski, and Arun Swami, "Database Mining: A Performance Perspective", IEEE Transactions on Knowledge and Data Engineering, 5(6), December 1993.

For the purposes of this demo we will split the Agrawal dataset into 3 feature groups and manually create the datasets:
  • economy_fg with customer id, salary, loan, value of house, age of house, commission and type of car features;
  • demographic_fg with customer id, age, education level and zip code features;
  • class_fg which contains labels indicating whether the loan was approved (class B) or rejected (class A).

Importing necessary libraries

import com.logicalclocks.hsfs._
import com.logicalclocks.hsfs.constructor._
import scala.collection.JavaConversions._
import collection.JavaConverters._

import org.apache.spark.sql.{ DataFrame, Row }
import org.apache.spark.sql.catalyst.expressions.GenericRow
import org.apache.spark.sql.types._

import java.sql.Date
import java.sql.Timestamp

val connection = HopsworksConnection.builder().build();
val fs = connection.getFeatureStore();
import com.logicalclocks.hsfs._
import com.logicalclocks.hsfs.constructor._
import scala.collection.JavaConversions._
import collection.JavaConverters._
import org.apache.spark.sql.{DataFrame, Row}
import org.apache.spark.sql.catalyst.expressions.GenericRow
import org.apache.spark.sql.types._
import java.sql.Date
import java.sql.Timestamp
connection: com.logicalclocks.hsfs.HopsworksConnection = com.logicalclocks.hsfs.HopsworksConnection@6012fc8c
fs: com.logicalclocks.hsfs.FeatureStore = FeatureStore{id=67, name='demo_fs_meb10000_featurestore', projectId=119, featureGroupApi=com.logicalclocks.hsfs.metadata.FeatureGroupApi@6cfc5ded}
val economyFgSchema = 
 scala.collection.immutable.List(
  StructField("id", IntegerType, true),
  StructField("salary", FloatType, true),
  StructField("commission", FloatType, true),
  StructField("car", StringType, true), 
  StructField("hvalue", FloatType, true),      
  StructField("hyears", IntegerType, true),     
  StructField("loan", FloatType, true),
  StructField("year", IntegerType, true)          
)

val demographicFgSchema = 
 scala.collection.immutable.List(
  StructField("id", IntegerType, true),
  StructField("age", IntegerType, true),
  StructField("elevel", StringType, true),   
  StructField("zipcode", StringType, true) 
)

val classFgSchema = 
 scala.collection.immutable.List(
  StructField("id", IntegerType, true),
  StructField("class", StringType, true),
  StructField("year", IntegerType, true)          
)
economyFgSchema: List[org.apache.spark.sql.types.StructField] = List(StructField(id,IntegerType,true), StructField(salary,FloatType,true), StructField(commission,FloatType,true), StructField(car,StringType,true), StructField(hvalue,FloatType,true), StructField(hyears,IntegerType,true), StructField(loan,FloatType,true), StructField(year,IntegerType,true))
demographicFgSchema: List[org.apache.spark.sql.types.StructField] = List(StructField(id,IntegerType,true), StructField(age,IntegerType,true), StructField(elevel,StringType,true), StructField(zipcode,StringType,true))
classFgSchema: List[org.apache.spark.sql.types.StructField] = List(StructField(id,IntegerType,true), StructField(class,StringType,true), StructField(year,IntegerType,true))

Create Spark DataFrames for each feature group

val economyBulkInsertData = Seq(
    Row(1, 110499.73f, 0.0f,  "car15",  235000.0f, 30, 354724.18f, 2020),
    Row(2, 140893.77f, 0.0f,  "car20",  135000.0f, 2, 395015.33f, 2020),
    Row(3, 119159.65f, 0.0f,  "car1", 145000.0f, 22, 122025.08f, 2020),
    Row(4, 20000.0f, 52593.63f, "car9", 185000.0f, 30, 99629.62f, 2020)
)

val economyBulkInsertDf = spark.createDataFrame(
    spark.sparkContext.parallelize(economyBulkInsertData),
    StructType(economyFgSchema)
)
economyBulkInsertData: Seq[org.apache.spark.sql.Row] = List([1,110499.73,0.0,car15,235000.0,30,354724.2,2020], [2,140893.77,0.0,car20,135000.0,2,395015.34,2020], [3,119159.65,0.0,car1,145000.0,22,122025.08,2020], [4,20000.0,52593.63,car9,185000.0,30,99629.62,2020])
economyBulkInsertDf: org.apache.spark.sql.DataFrame = [id: int, salary: float ... 6 more fields]
economyBulkInsertDf.show()
+---+---------+----------+-----+--------+------+---------+----+
| id|   salary|commission|  car|  hvalue|hyears|     loan|year|
+---+---------+----------+-----+--------+------+---------+----+
|  1|110499.73|       0.0|car15|235000.0|    30| 354724.2|2020|
|  2|140893.77|       0.0|car20|135000.0|     2|395015.34|2020|
|  3|119159.65|       0.0| car1|145000.0|    22|122025.08|2020|
|  4|  20000.0|  52593.63| car9|185000.0|    30| 99629.62|2020|
+---+---------+----------+-----+--------+------+---------+----+
val demographicBulkInsertData = Seq(
    Row(1, 54, "level3", "zipcode5"),
    Row(2, 44, "level4", "zipcode8"),
    Row(3, 49, "level2", "zipcode4"),
    Row(4, 56, "level0", "zipcode2")
)

val demographicBulkInsertDf = spark.createDataFrame(
    spark.sparkContext.parallelize(demographicBulkInsertData),
    StructType(demographicFgSchema)
)
demographicBulkInsertData: Seq[org.apache.spark.sql.Row] = List([1,54,level3,zipcode5], [2,44,level4,zipcode8], [3,49,level2,zipcode4], [4,56,level0,zipcode2])
demographicBulkInsertDf: org.apache.spark.sql.DataFrame = [id: int, age: int ... 2 more fields]
demographicBulkInsertDf.show()
+---+---+------+--------+
| id|age|elevel| zipcode|
+---+---+------+--------+
|  1| 54|level3|zipcode5|
|  2| 44|level4|zipcode8|
|  3| 49|level2|zipcode4|
|  4| 56|level0|zipcode2|
+---+---+------+--------+
val classBulkInsertData = Seq(
    Row(1, "groupB", 2020),
    Row(2, "groupB", 2020),
    Row(3, "groupB", 2020),
    Row(4, "groupB", 2020)
) 

val classBulkInsertDf = spark.createDataFrame(
    spark.sparkContext.parallelize(classBulkInsertData),
    StructType(classFgSchema)
)
classBulkInsertData: Seq[org.apache.spark.sql.Row] = List([1,groupB,2020], [2,groupB,2020], [3,groupB,2020], [4,groupB,2020])
classBulkInsertDf: org.apache.spark.sql.DataFrame = [id: int, class: string ... 1 more field]
classBulkInsertDf.show()
+---+------+----+
| id| class|year|
+---+------+----+
|  1|groupB|2020|
|  2|groupB|2020|
|  3|groupB|2020|
|  4|groupB|2020|
+---+------+----+

Create feature groups

Now we will create each feature group with the time travel format TimeTravelFormat.HUDI enabled. In the Hopsworks Feature Store, primary keys (and optionally partition keys) must be provided for HUDI-enabled feature groups.

val economyFg = (fs.createFeatureGroup()
                .name("economy_fg")
                .description("Hudi Household Economy Feature Group")
                .version(1)
                .primaryKeys(Seq("id"))
                .partitionKeys(Seq("year"))
                .hudiPrecombineKey("id") 
                .timeTravelFormat(TimeTravelFormat.HUDI)
                .build())
economyFg: com.logicalclocks.hsfs.FeatureGroup = com.logicalclocks.hsfs.FeatureGroup@fe70f03
val demographyFg = (fs.createFeatureGroup()
                    .name("demography_fg")
                    .description("Hudi Demographic Feature Group")
                    .version(1)
                    .primaryKeys(Seq("id"))
                    .partitionKeys(Seq("zipcode"))
                    .timeTravelFormat(TimeTravelFormat.HUDI)
                    .build())
demographyFg: com.logicalclocks.hsfs.FeatureGroup = com.logicalclocks.hsfs.FeatureGroup@12a910f3
val classFg = (fs.createFeatureGroup()
                .name("class_fg")
                .description("Hudi Class Feature Group")
                .version(1)
                .primaryKeys(Seq("id"))
                .hudiPrecombineKey("year")
                .timeTravelFormat(TimeTravelFormat.HUDI)
                .build())
classFg: com.logicalclocks.hsfs.FeatureGroup = com.logicalclocks.hsfs.FeatureGroup@46b5bec3

Define user-provided Hudi options

By default, Hudi tends to over-partition the input. The recommended shuffle parallelism for hoodie.[insert|upsert|bulkinsert].shuffle.parallelism is at least input_data_size / 500MB; for example, 100 GB of input data suggests a parallelism of around 200.

val extra_hudi_options = Map("hoodie.insert.shuffle.parallelism" -> "1", 
    "hoodie.upsert.shuffle.parallelism" -> "1",
    "hoodie.parquet.compression.ratio" -> "0.5")
extra_hudi_options: scala.collection.immutable.Map[String,String] = Map(hoodie.insert.shuffle.parallelism -> 1, hoodie.upsert.shuffle.parallelism -> 1, hoodie.parquet.compression.ratio -> 0.5)

Bulk insert data into the feature groups

Since we have not yet saved any data into the newly created feature groups, we will, in Apache Hudi terminology, Bulk Insert the data. In HSFS this is just a matter of issuing the save method.

economyFg.save(economyBulkInsertDf, extra_hudi_options)
demographyFg.save(demographicBulkInsertDf, extra_hudi_options)
classFg.save(classBulkInsertDf, extra_hudi_options)

Hopsworks Feature Store Commits

If you have been following this demo closely, you will have noticed that the Hopsworks Feature Store uses Apache Hudi as its time travel engine. Hudi introduces the notion of commits, which means it supports certain properties of traditional databases such as single-table transactions, snapshot isolation, atomic upserts and savepoints for data recovery. If an ingestion fails for some reason, no partial results are written; instead, the ingestion is rolled back. Each commit is made visible using an atomic rename (mv) operation in HDFS.
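
These commits are also visible on the file system itself: each completed commit is a small metadata file under the table's .hoodie directory, and it is the atomic rename that publishes this file. A minimal sketch of listing them, assuming a hypothetical table location (in Hopsworks, HSFS manages the real path for each feature group):

import org.apache.hadoop.fs.{FileSystem, Path}

// Hypothetical table location; HSFS manages the actual path for a feature group.
val tablePath = new Path("hdfs:///path/to/featurestore.db/economy_fg_1")
val hdfs = FileSystem.get(spark.sparkContext.hadoopConfiguration)

// Completed commits appear as <instant>.commit files on the Hudi timeline.
hdfs.listStatus(new Path(tablePath, ".hoodie"))
  .map(_.getPath.getName)
  .filter(_.endsWith(".commit"))
  .sorted
  .foreach(println)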

Currently, the feature groups that we created contain only a single commit each, as we have just done a single bulk-insert. Let's explore the timeline of economyFg:

for ((k,v) <- economyFg.commitDetails()){
    println (k,v)
}
(1609945991000,{committedOn=20210106151311, rowsUpdated=0, rowsDeleted=0, rowsInserted=4})

Inspect results

economyFg.read().show()
+---+---------+----------+-----+--------+------+---------+----+
| id|   salary|commission|  car|  hvalue|hyears|     loan|year|
+---+---------+----------+-----+--------+------+---------+----+
|  2|140893.77|       0.0|car20|135000.0|     2|395015.34|2020|
|  1|110499.73|       0.0|car15|235000.0|    30| 354724.2|2020|
|  4|  20000.0|  52593.63| car9|185000.0|    30| 99629.62|2020|
|  3|119159.65|       0.0| car1|145000.0|    22|122025.08|2020|
+---+---------+----------+-----+--------+------+---------+----+
demographyFg.read().show()
+---+---+------+--------+
|age| id|elevel| zipcode|
+---+---+------+--------+
| 44|  2|level4|zipcode8|
| 56|  4|level0|zipcode2|
| 54|  1|level3|zipcode5|
| 49|  3|level2|zipcode4|
+---+---+------+--------+
classFg.read().show()
+---+------+----+
| id| class|year|
+---+------+----+
|  3|groupB|2020|
|  4|groupB|2020|
|  2|groupB|2020|
|  1|groupB|2020|
+---+------+----+

Upsert new data into a Feature Group

So far we have not done anything time-travel specific; we simply did a regular bulk-insert of some data into a Hudi-enabled feature group. We could have done the same thing with a regular, non-Hudi-enabled feature group. Now, however, we will look into how we can do upserts, and how the Hopsworks Feature Store enables us to do this efficiently.

Generate Sample Upsert Data

val economyUpsertData = Seq(
    Row(1, 120499.73f, 0.0f, "car17", 205000.0f, 30, 564724.18f, 2020),    //update
    Row(2, 160893.77f, 0.0f, "car10", 179000.0f, 2, 455015.33f, 2020),     //update
    Row(5, 93956.32f, 0.0f, "car15",  135000.0f, 1, 458679.82f, 2020),     //insert
    Row(6, 41365.43f, 52809.15f, "car7", 135000.0f, 19, 216839.71f, 2020), //insert
    Row(7, 94805.61f, 0.0f, "car17", 135000.0f, 23, 233216.07f, 2020)      //insert
)

val economyUpsertDf = spark.createDataFrame(
  spark.sparkContext.parallelize(economyUpsertData),
  StructType(economyFgSchema)
)

economyUpsertDf.show(5)
economyUpsertData: Seq[org.apache.spark.sql.Row] = List([1,120499.73,0.0,car17,205000.0,30,564724.2,2020], [2,160893.77,0.0,car10,179000.0,2,455015.34,2020], [5,93956.32,0.0,car15,135000.0,1,458679.8,2020], [6,41365.43,52809.15,car7,135000.0,19,216839.7,2020], [7,94805.61,0.0,car17,135000.0,23,233216.06,2020])
economyUpsertDf: org.apache.spark.sql.DataFrame = [id: int, salary: float ... 6 more fields]
+---+---------+----------+-----+--------+------+---------+----+
| id|   salary|commission|  car|  hvalue|hyears|     loan|year|
+---+---------+----------+-----+--------+------+---------+----+
|  1|120499.73|       0.0|car17|205000.0|    30| 564724.2|2020|
|  2|160893.77|       0.0|car10|179000.0|     2|455015.34|2020|
|  5| 93956.32|       0.0|car15|135000.0|     1| 458679.8|2020|
|  6| 41365.43|  52809.15| car7|135000.0|    19| 216839.7|2020|
|  7| 94805.61|       0.0|car17|135000.0|    23|233216.06|2020|
+---+---------+----------+-----+--------+------+---------+----+
val demographicUpsertData = Seq(
    Row(2, 44, "level1", "zipcode8"),     //update
    Row(5, 59, "level1", "zipcode2"),     //insert
    Row(6, 71, "level2", "zipcode3"),     //insert
    Row(7, 32, "level1", "zipcode2")      //insert
)

val demographicUpsertDf = spark.createDataFrame(
    spark.sparkContext.parallelize(demographicUpsertData),
    StructType(demographicFgSchema)
)

demographicUpsertDf.show()
demographicUpsertData: Seq[org.apache.spark.sql.Row] = List([2,44,level1,zipcode8], [5,59,level1,zipcode2], [6,71,level2,zipcode3], [7,32,level1,zipcode2])
demographicUpsertDf: org.apache.spark.sql.DataFrame = [id: int, age: int ... 2 more fields]
+---+---+------+--------+
| id|age|elevel| zipcode|
+---+---+------+--------+
|  2| 44|level1|zipcode8|
|  5| 59|level1|zipcode2|
|  6| 71|level2|zipcode3|
|  7| 32|level1|zipcode2|
+---+---+------+--------+
val classUpsertData = Seq(
    Row(1, "groupA", 2020), //update
    Row(5, "groupA", 2020), //insert
    Row(6, "groupA", 2020), //insert
    Row(7, "groupA", 2020)  //insert
) 

val classUpsertDf = spark.createDataFrame(
    spark.sparkContext.parallelize(classUpsertData),
    StructType(classFgSchema)
)

classUpsertDf.show()
classUpsertData: Seq[org.apache.spark.sql.Row] = List([1,groupA,2020], [5,groupA,2020], [6,groupA,2020], [7,groupA,2020])
classUpsertDf: org.apache.spark.sql.DataFrame = [id: int, class: string ... 1 more field]
+---+------+----+
| id| class|year|
+---+------+----+
|  1|groupA|2020|
|  5|groupA|2020|
|  6|groupA|2020|
|  7|groupA|2020|
+---+------+----+

Make the Upsert using the Hopsworks Feature Store API

In the Hopsworks Feature Store, issuing the insert method on an Apache Hudi enabled feature group will by default perform an Upsert operation, which either inserts a new row or, on the basis of the primary and partition keys, updates an already existing one.

economyFg.insert(economyUpsertDf, extra_hudi_options)
demographyFg.insert(demographicUpsertDf, extra_hudi_options)
classFg.insert(classUpsertDf, extra_hudi_options)

Inspect the results

Notice that although a Hudi-enabled feature group stores the old values of the records from previous commits, a query will only return the values of the latest commit.

economyFg.read().show()
+---+---------+----------+-----+--------+------+---------+----+
| id|   salary|commission|  car|  hvalue|hyears|     loan|year|
+---+---------+----------+-----+--------+------+---------+----+
|  1|120499.73|       0.0|car17|205000.0|    30| 564724.2|2020|
|  5| 93956.32|       0.0|car15|135000.0|     1| 458679.8|2020|
|  6| 41365.43|  52809.15| car7|135000.0|    19| 216839.7|2020|
|  7| 94805.61|       0.0|car17|135000.0|    23|233216.06|2020|
|  2|160893.77|       0.0|car10|179000.0|     2|455015.34|2020|
|  4|  20000.0|  52593.63| car9|185000.0|    30| 99629.62|2020|
|  3|119159.65|       0.0| car1|145000.0|    22|122025.08|2020|
+---+---------+----------+-----+--------+------+---------+----+
demographyFg.read().show()
+---+---+------+--------+
|age| id|elevel| zipcode|
+---+---+------+--------+
| 71|  6|level2|zipcode3|
| 44|  2|level1|zipcode8|
| 54|  1|level3|zipcode5|
| 49|  3|level2|zipcode4|
| 56|  4|level0|zipcode2|
| 59|  5|level1|zipcode2|
| 32|  7|level1|zipcode2|
+---+---+------+--------+
classFg.read.show()
+---+------+----+
| id| class|year|
+---+------+----+
|  1|groupA|2020|
|  4|groupB|2020|
|  2|groupB|2020|
|  3|groupB|2020|
|  5|groupA|2020|
|  6|groupA|2020|
|  7|groupA|2020|
+---+------+----+

Inspect the updated commit timelines

for ((k,v) <- economyFg.commitDetails()){
    println (k,v)
}
(1609945991000,{committedOn=20210106151311, rowsUpdated=0, rowsDeleted=0, rowsInserted=4})
(1609946149000,{committedOn=20210106151549, rowsUpdated=2, rowsDeleted=0, rowsInserted=3})
for ((k,v) <- demographyFg.commitDetails()){
    println (k,v)
}
(1609946224000,{committedOn=20210106151704, rowsUpdated=1, rowsDeleted=0, rowsInserted=3})
(1609946058000,{committedOn=20210106151418, rowsUpdated=0, rowsDeleted=0, rowsInserted=4})
for ((k,v) <- classFg.commitDetails()){
    println (k,v)
}
(1609946089000,{committedOn=20210106151449, rowsUpdated=0, rowsDeleted=0, rowsInserted=4})
(1609946274000,{committedOn=20210106151754, rowsUpdated=1, rowsDeleted=0, rowsInserted=3})

Let's make one more commit to better demonstrate the time travel capabilities of the Hopsworks Feature Store.

val economyUpsertData = Seq(    
    Row(8, 64410.62f, 39884.39f, "car20",  125000.0f, 6, 350707.38f, 2020), //insert
    Row(9, 128298.82f, 0.0f, "car19",  135000.0f, 12, 20768.06f, 2020),     //insert
    Row(10,100806.92f, 0.0f, "car8", 135000.0f, 6, 293106.65f, 2020)        //insert   
    
)

val economyUpsertDf = spark.createDataFrame(
  spark.sparkContext.parallelize(economyUpsertData),
  StructType(economyFgSchema)
)

economyUpsertDf.show()
economyUpsertData: Seq[org.apache.spark.sql.Row] = List([8,64410.62,39884.39,car20,125000.0,6,350707.38,2020], [9,128298.82,0.0,car19,135000.0,12,20768.06,2020], [10,100806.92,0.0,car8,135000.0,6,293106.66,2020])
economyUpsertDf: org.apache.spark.sql.DataFrame = [id: int, salary: float ... 6 more fields]
+---+---------+----------+-----+--------+------+---------+----+
| id|   salary|commission|  car|  hvalue|hyears|     loan|year|
+---+---------+----------+-----+--------+------+---------+----+
|  8| 64410.62|  39884.39|car20|125000.0|     6|350707.38|2020|
|  9|128298.82|       0.0|car19|135000.0|    12| 20768.06|2020|
| 10|100806.92|       0.0| car8|135000.0|     6|293106.66|2020|
+---+---------+----------+-----+--------+------+---------+----+
val demographicUpsertData = Seq(    
    Row(8, 33, "level2", "zipcode1"),     //insert
    Row(9, 32, "level1", "zipcode3"),     //insert
    Row(10, 58, "level2", "zipcode5")     //insert    
)

val demographicUpsertDf = spark.createDataFrame(
    spark.sparkContext.parallelize(demographicUpsertData),
    StructType(demographicFgSchema)
)

demographicUpsertDf.show()
demographicUpsertData: Seq[org.apache.spark.sql.Row] = List([8,33,level2,zipcode1], [9,32,level1,zipcode3], [10,58,level2,zipcode5])
demographicUpsertDf: org.apache.spark.sql.DataFrame = [id: int, age: int ... 2 more fields]
+---+---+------+--------+
| id|age|elevel| zipcode|
+---+---+------+--------+
|  8| 33|level2|zipcode1|
|  9| 32|level1|zipcode3|
| 10| 58|level2|zipcode5|
+---+---+------+--------+
val classUpsertData = Seq(
    Row(8, "groupA", 2020), //insert
    Row(9, "groupA", 2020), //insert
    Row(10, "groupB", 2020) //insert    
) 

val classUpsertDf = spark.createDataFrame(
    spark.sparkContext.parallelize(classUpsertData),
    StructType(classFgSchema)
)

classUpsertDf.show()
classUpsertData: Seq[org.apache.spark.sql.Row] = List([8,groupA,2020], [9,groupA,2020], [10,groupB,2020])
classUpsertDf: org.apache.spark.sql.DataFrame = [id: int, class: string ... 1 more field]
+---+------+----+
| id| class|year|
+---+------+----+
|  8|groupA|2020|
|  9|groupA|2020|
| 10|groupB|2020|
+---+------+----+
economyFg.insert(economyUpsertDf, extra_hudi_options)
demographyFg.insert(demographicUpsertDf, extra_hudi_options)
classFg.insert(classUpsertDf, extra_hudi_options)

Time Travel Queries

When the read method is issued on a FeatureGroup object without any parameters, the most recent view of the feature group is returned.

economyFg.read().show()
+---+---------+----------+-----+--------+------+---------+----+
| id|   salary|commission|  car|  hvalue|hyears|     loan|year|
+---+---------+----------+-----+--------+------+---------+----+
|  1|120499.73|       0.0|car17|205000.0|    30| 564724.2|2020|
|  5| 93956.32|       0.0|car15|135000.0|     1| 458679.8|2020|
|  6| 41365.43|  52809.15| car7|135000.0|    19| 216839.7|2020|
|  7| 94805.61|       0.0|car17|135000.0|    23|233216.06|2020|
|  8| 64410.62|  39884.39|car20|125000.0|     6|350707.38|2020|
|  9|128298.82|       0.0|car19|135000.0|    12| 20768.06|2020|
| 10|100806.92|       0.0| car8|135000.0|     6|293106.66|2020|
|  2|160893.77|       0.0|car10|179000.0|     2|455015.34|2020|
|  4|  20000.0|  52593.63| car9|185000.0|    30| 99629.62|2020|
|  3|119159.65|       0.0| car1|145000.0|    22|122025.08|2020|
+---+---------+----------+-----+--------+------+---------+----+

Using the timeline metadata we can inspect the value of a table at a specific point in time, as well as pull changes incrementally.

val commitTimeline = economyFg.commitDetails()
for ((k,v) <- commitTimeline){
    println (k,v)
}
commitTimeline: java.util.Map[String,java.util.Map[String,String]] = {1609945991000={committedOn=20210106151311, rowsUpdated=0, rowsDeleted=0, rowsInserted=4}, 1609946356000={committedOn=20210106151916, rowsUpdated=0, rowsDeleted=0, rowsInserted=3}, 1609946149000={committedOn=20210106151549, rowsUpdated=2, rowsDeleted=0, rowsInserted=3}}
(1609945991000,{committedOn=20210106151311, rowsUpdated=0, rowsDeleted=0, rowsInserted=4})
(1609946356000,{committedOn=20210106151916, rowsUpdated=0, rowsDeleted=0, rowsInserted=3})
(1609946149000,{committedOn=20210106151549, rowsUpdated=2, rowsDeleted=0, rowsInserted=3})
val economyFgCommitTimestamps = economyFg.commitDetails().values().map(c => c.get("committedOn")).toList.sorted
economyFgCommitTimestamps: List[String] = List(20210106151311, 20210106151549, 20210106151916)
// pull 1st commit
economyFg.read(economyFgCommitTimestamps(0)).show()
+---+---------+----------+-----+--------+------+---------+----+
| id|   salary|commission|  car|  hvalue|hyears|     loan|year|
+---+---------+----------+-----+--------+------+---------+----+
|  2|140893.77|       0.0|car20|135000.0|     2|395015.34|2020|
|  1|110499.73|       0.0|car15|235000.0|    30| 354724.2|2020|
|  4|  20000.0|  52593.63| car9|185000.0|    30| 99629.62|2020|
|  3|119159.65|       0.0| car1|145000.0|    22|122025.08|2020|
+---+---------+----------+-----+--------+------+---------+----+
// pull 2nd commit
economyFg.read(economyFgCommitTimestamps(1)).show()
+---+---------+----------+-----+--------+------+---------+----+
| id|   salary|commission|  car|  hvalue|hyears|     loan|year|
+---+---------+----------+-----+--------+------+---------+----+
|  1|120499.73|       0.0|car17|205000.0|    30| 564724.2|2020|
|  5| 93956.32|       0.0|car15|135000.0|     1| 458679.8|2020|
|  6| 41365.43|  52809.15| car7|135000.0|    19| 216839.7|2020|
|  7| 94805.61|       0.0|car17|135000.0|    23|233216.06|2020|
|  2|160893.77|       0.0|car10|179000.0|     2|455015.34|2020|
|  4|  20000.0|  52593.63| car9|185000.0|    30| 99629.62|2020|
|  3|119159.65|       0.0| car1|145000.0|    22|122025.08|2020|
+---+---------+----------+-----+--------+------+---------+----+
// pull 3rd commit
economyFg.read(economyFgCommitTimestamps(2)).show()
+---+---------+----------+-----+--------+------+---------+----+
| id|   salary|commission|  car|  hvalue|hyears|     loan|year|
+---+---------+----------+-----+--------+------+---------+----+
|  1|120499.73|       0.0|car17|205000.0|    30| 564724.2|2020|
|  5| 93956.32|       0.0|car15|135000.0|     1| 458679.8|2020|
|  6| 41365.43|  52809.15| car7|135000.0|    19| 216839.7|2020|
|  7| 94805.61|       0.0|car17|135000.0|    23|233216.06|2020|
|  8| 64410.62|  39884.39|car20|125000.0|     6|350707.38|2020|
|  9|128298.82|       0.0|car19|135000.0|    12| 20768.06|2020|
| 10|100806.92|       0.0| car8|135000.0|     6|293106.66|2020|
|  2|160893.77|       0.0|car10|179000.0|     2|455015.34|2020|
|  4|  20000.0|  52593.63| car9|185000.0|    30| 99629.62|2020|
|  3|119159.65|       0.0| car1|145000.0|    22|122025.08|2020|
+---+---------+----------+-----+--------+------+---------+----+

Hopsworks Feature Store also provides a method for incremental reads:

// Pull changes that happened between the first and second commits
economyFg.readChanges(economyFgCommitTimestamps(0), economyFgCommitTimestamps(1)).show()
+---+---------+----------+-----+--------+------+---------+----+
| id|   salary|commission|  car|  hvalue|hyears|     loan|year|
+---+---------+----------+-----+--------+------+---------+----+
|  1|120499.73|       0.0|car17|205000.0|    30| 564724.2|2020|
|  5| 93956.32|       0.0|car15|135000.0|     1| 458679.8|2020|
|  6| 41365.43|  52809.15| car7|135000.0|    19| 216839.7|2020|
|  7| 94805.61|       0.0|car17|135000.0|    23|233216.06|2020|
|  2|160893.77|       0.0|car10|179000.0|     2|455015.34|2020|
+---+---------+----------+-----+--------+------+---------+----+
// Pull changes that happened between the second and third commits 
economyFg.readChanges(economyFgCommitTimestamps(1), economyFgCommitTimestamps(2)).show()
+---+---------+----------+-----+--------+------+---------+----+
| id|   salary|commission|  car|  hvalue|hyears|     loan|year|
+---+---------+----------+-----+--------+------+---------+----+
|  8| 64410.62|  39884.39|car20|125000.0|     6|350707.38|2020|
|  9|128298.82|       0.0|car19|135000.0|    12| 20768.06|2020|
| 10|100806.92|       0.0| car8|135000.0|     6|293106.66|2020|
+---+---------+----------+-----+--------+------+---------+----+

Join feature groups that correspond to a specific point in time

If we are interested in joining feature groups such that all of them correspond to one specific point in time, we can issue the asOf method on the joined Query object.

val economyFg = fs.getFeatureGroup("economy_fg")
val demographyFg = fs.getFeatureGroup("demography_fg")
val classFg = fs.getFeatureGroup("class_fg")
economyFg: com.logicalclocks.hsfs.FeatureGroup = com.logicalclocks.hsfs.FeatureGroup@5d5f7b53
demographyFg: com.logicalclocks.hsfs.FeatureGroup = com.logicalclocks.hsfs.FeatureGroup@364e0911
classFg: com.logicalclocks.hsfs.FeatureGroup = com.logicalclocks.hsfs.FeatureGroup@261952cf
val joined_features = ((economyFg.selectAll())
                   .join(demographyFg.selectAll(), Seq("id"), JoinType.INNER)
                   .join(classFg.selectAll(), Seq("id"), JoinType.INNER)
                   .asOf(economyFgCommitTimestamps(2)))
joined_features: com.logicalclocks.hsfs.constructor.Query = SELECT `fg2`.`hvalue`, `fg2`.`car`, `fg2`.`commission`, `fg2`.`id`, `fg2`.`loan`, `fg2`.`salary`, `fg2`.`hyears`, `fg2`.`year`, `fg0`.`age`, `fg0`.`id`, `fg0`.`elevel`, `fg0`.`zipcode`, `fg1`.`year`, `fg1`.`id`, `fg1`.`class` FROM `fg2` `fg2` INNER JOIN `fg0` `fg0` ON `fg2`.`id` = `fg0`.`id` INNER JOIN `fg1` `fg1` ON `fg2`.`id` = `fg1`.`id`
joined_features.read().show()
+--------+-----+----------+---+---------+---------+------+----+---+---+------+--------+----+---+------+
|  hvalue|  car|commission| id|     loan|   salary|hyears|year|age| id|elevel| zipcode|year| id| class|
+--------+-----+----------+---+---------+---------+------+----+---+---+------+--------+----+---+------+
|205000.0|car17|       0.0|  1| 564724.2|120499.73|    30|2020| 54|  1|level3|zipcode5|2020|  1|groupA|
|135000.0| car7|  52809.15|  6| 216839.7| 41365.43|    19|2020| 71|  6|level2|zipcode3|2020|  6|groupA|
|145000.0| car1|       0.0|  3|122025.08|119159.65|    22|2020| 49|  3|level2|zipcode4|2020|  3|groupB|
|135000.0|car15|       0.0|  5| 458679.8| 93956.32|     1|2020| 59|  5|level1|zipcode2|2020|  5|groupA|
|185000.0| car9|  52593.63|  4| 99629.62|  20000.0|    30|2020| 56|  4|level0|zipcode2|2020|  4|groupB|
|135000.0|car17|       0.0|  7|233216.06| 94805.61|    23|2020| 32|  7|level1|zipcode2|2020|  7|groupA|
|179000.0|car10|       0.0|  2|455015.34|160893.77|     2|2020| 44|  2|level1|zipcode8|2020|  2|groupB|
+--------+-----+----------+---+---------+---------+------+----+---+---+------+--------+----+---+------+

Join Feature groups that correspond to different points in time

The Hopsworks Feature Store also provides functionality to join feature groups that correspond to different points in time.

val economyFgQuery = economyFg.selectAll().asOf(economyFgCommitTimestamps(2))

val demographyTimestamps = demographyFg.commitDetails().values().map(c => c.get("committedOn")).toList.sorted
val demographyFgQuery = demographyFg.selectAll().asOf(demographyTimestamps(1))


val classTimestamps = classFg.commitDetails().values().map(c => c.get("committedOn")).toList.sorted
val classFgQuery =  classFg.selectAll().asOf(classTimestamps(0))
economyFgQuery: com.logicalclocks.hsfs.constructor.Query = SELECT `fg0`.`hvalue`, `fg0`.`car`, `fg0`.`commission`, `fg0`.`id`, `fg0`.`loan`, `fg0`.`salary`, `fg0`.`hyears`, `fg0`.`year` FROM `fg0` `fg0`
demographyTimestamps: List[String] = List(20210106151418, 20210106151704, 20210106152022)
demographyFgQuery: com.logicalclocks.hsfs.constructor.Query = SELECT `fg0`.`age`, `fg0`.`id`, `fg0`.`elevel`, `fg0`.`zipcode` FROM `fg0` `fg0`
classTimestamps: List[String] = List(20210106151449, 20210106151754, 20210106152114)
classFgQuery: com.logicalclocks.hsfs.constructor.Query = SELECT `fg0`.`year`, `fg0`.`id`, `fg0`.`class` FROM `fg0` `fg0`
val joined_features = economyFgQuery.join(demographyFgQuery, Seq("id"), JoinType.INNER).join(classFgQuery, Seq("id"), JoinType.INNER)
joined_features: com.logicalclocks.hsfs.constructor.Query = SELECT `fg2`.`hvalue`, `fg2`.`car`, `fg2`.`commission`, `fg2`.`id`, `fg2`.`loan`, `fg2`.`salary`, `fg2`.`hyears`, `fg2`.`year`, `fg0`.`age`, `fg0`.`id`, `fg0`.`elevel`, `fg0`.`zipcode`, `fg1`.`year`, `fg1`.`id`, `fg1`.`class` FROM `fg2` `fg2` INNER JOIN `fg0` `fg0` ON `fg2`.`id` = `fg0`.`id` INNER JOIN `fg1` `fg1` ON `fg2`.`id` = `fg1`.`id`
joined_features.read().show()
+--------+-----+----------+---+---------+---------+------+----+---+---+------+--------+----+---+------+
|  hvalue|  car|commission| id|     loan|   salary|hyears|year|age| id|elevel| zipcode|year| id| class|
+--------+-----+----------+---+---------+---------+------+----+---+---+------+--------+----+---+------+
|205000.0|car17|       0.0|  1| 564724.2|120499.73|    30|2020| 54|  1|level3|zipcode5|2020|  1|groupB|
|145000.0| car1|       0.0|  3|122025.08|119159.65|    22|2020| 49|  3|level2|zipcode4|2020|  3|groupB|
|185000.0| car9|  52593.63|  4| 99629.62|  20000.0|    30|2020| 56|  4|level0|zipcode2|2020|  4|groupB|
|179000.0|car10|       0.0|  2|455015.34|160893.77|     2|2020| 44|  2|level1|zipcode8|2020|  2|groupB|
+--------+-----+----------+---+---------+---------+------+----+---+---+------+--------+----+---+------+