Time-Travel with PySpark

Time travel operations in Hopsworks Feature Store

In this notebook we will introduce time travel operations in the Hopsworks Feature Store (HSFS). Currently, HSFS supports Apache Hudi (http://hudi.apache.org/), a storage abstraction/library for incremental data ingestion into the Hopsworks Feature Store.

Background

Motivation

Traditional ETL typically involves taking a snapshot of a production database and doing a full load into a data lake (typically stored on a distributed file system). The snapshot approach to ETL is simple, since the snapshot is immutable and can be loaded as an atomic unit into the data lake. The drawback of this approach, however, is that it is slow. Even if just a single record has been updated since the last data ingestion, the entire table has to be re-written. If you are working with big data (TB or PB sized datasets), this introduces significant data latency and wasted resources (the majority of the writes when ingesting the snapshot are redundant, as most of the records have not been updated since the last ETL step).

This motivates the use-case for incremental data ingestion, where only the deltas/changelogs since the last ingestion are inserted. With incremental processing, you process data in mini-batches and run the Spark job frequently. The incremental model makes better use of resources and makes it easier to do complex processing and joins.

In addition, data is rarely immutable in practice. A bank transaction might be reverted, a customer might change their home address, and a customer review might be updated, to give a few examples. This is where Hudi comes into the picture. Hudi stands for Hadoop Upserts anD Incrementals and brings two new primitives for data engineering on distributed file systems (in addition to append/read); both are sketched right after this list:

  • Upsert: the ability to do insertions (appends) and updates efficiently.
  • Incremental reads: the ability to read datasets incrementally using the notion of “commits”.
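
To make these two primitives concrete, here is a minimal sketch of what they look like at the plain Spark + Hudi level, outside of HSFS (which, as we will see, wraps all of this for you). The table name, base path and begin-instant time are illustrative assumptions, and the option keys are those of recent Hudi releases:

# Illustrative sketch only; df is an existing Spark dataframe keyed by "id".
# Upsert: rows whose record key already exists are updated, new keys are inserted.
(df.write.format("hudi")
    .option("hoodie.table.name", "economy")                        # assumed name
    .option("hoodie.datasource.write.recordkey.field", "id")
    .option("hoodie.datasource.write.precombine.field", "year")
    .option("hoodie.datasource.write.operation", "upsert")
    .mode("append")
    .save("/path/to/economy"))                                     # assumed path

# Incremental read: only rows committed after the given instant are returned.
incremental_df = (spark.read.format("hudi")
    .option("hoodie.datasource.query.type", "incremental")
    .option("hoodie.datasource.read.begin.instanttime", "20200101000000")
    .load("/path/to/economy"))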

How Hopsworks Feature Store time travel operations can be used for ML and Feature Pipelines

Hudi is integrated into the Hopsworks Feature Store for doing incremental feature computation and for point-in-time correctness and backfilling of feature data.

Incremental Feature Engineering
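
With Hudi-enabled feature groups, a feature pipeline only needs to process the rows that changed since its last run instead of recomputing everything from a full snapshot. Below is a minimal sketch of this pattern; the feature group names (transactions_fg, customer_spend_fg) and the customer_id/amount columns are hypothetical, while commit_details, read_changes and insert are the HSFS calls demonstrated later in this notebook:

from pyspark.sql import functions as F

# Hypothetical source and derived feature groups (names are illustrative).
raw_fg = fs.get_feature_group("transactions_fg", version=1)
derived_fg = fs.get_feature_group("customer_spend_fg", version=1)

# Find the 'committedOn' timestamps of the two most recent commits on the source
# (assumes the source feature group already has at least two commits).
details = raw_fg.commit_details()
start, end = [details[c]['committedOn'] for c in sorted(details.keys())[-2:]]

# Read only the rows that changed between those commits ...
changes_df = raw_fg.read_changes(start, end)

# ... recompute the derived feature for the affected customers only ...
updated_df = changes_df.groupBy("customer_id").agg(F.sum("amount").alias("total_spend"))

# ... and write it back; on a Hudi-enabled feature group, insert is an upsert.
derived_fg.insert(updated_df)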

Examples

Create HUDI time travel enabled feature groups and bulk insert a sample dataset

For this demo we will use a small sample from the Agrawal generator, a widely used synthetic dataset. It contains hypothetical data on people applying for a loan. Rakesh Agrawal, Tomasz Imielinski, and Arun Swami, "Database Mining: A Performance Perspective", IEEE Transactions on Knowledge and Data Engineering, 5(6), December 1993.

For simplicity, we will split the Agrawal dataset into 3 feature groups and manually create the datasets:
  • economy_fg with customer id, salary, loan, value of house, age of house, commission and type of car features;
  • demographic_fg with customer id, age, education level and zip code features;
  • class_fg which contains the labels: whether the loan was approved (class B) or rejected (class A).

Importing necessary libraries

import hsfs
import datetime
from pyspark.sql import DataFrame, Row
from pyspark.sql.types import *
from pyspark.sql.functions import unix_timestamp, from_unixtime

connection = hsfs.connection()
# get a reference to the feature store; you can also access shared feature stores by providing the feature store name
fs = connection.get_feature_store()
Connected. Call `.close()` to terminate connection gracefully.
economy_fg_schema = StructType([
  StructField("id", IntegerType(), True),
  StructField("salary", FloatType(), True),
  StructField("commission", FloatType(), True),
  StructField("car", StringType(), True), 
  StructField("hvalue", FloatType(), True),      
  StructField("hyears", IntegerType(), True),     
  StructField("loan", FloatType(), True),
  StructField("year", IntegerType(), True)    
])

demographic_fg_schema = StructType([
  StructField("id", IntegerType(), True),
  StructField("age", IntegerType(), True),
  StructField("elevel", StringType(), True),   
  StructField("zipcode", StringType(), True)     
])

class_fg_schema =  StructType([
  StructField("id", IntegerType(), True),
  StructField("class", StringType(), True),
  StructField("year", IntegerType(), True)              
])

Create Spark dataframes for each feature group

economy_bulk_insert_data = [
    Row(1, 110499.73, 0.0,  "car15",  235000.0, 30, 354724.18, 2020),
    Row(2, 140893.77, 0.0,  "car20",  135000.0, 2, 395015.33, 2020),
    Row(3, 119159.65, 0.0,  "car1", 145000.0, 22, 122025.08, 2020),
    Row(4, 20000.0, 52593.63, "car9", 185000.0, 30, 99629.62, 2020)    
]

economy_bulk_insert_df = spark.createDataFrame(economy_bulk_insert_data, economy_fg_schema)
economy_bulk_insert_df.show()
+---+---------+----------+-----+--------+------+---------+----+
| id|   salary|commission|  car|  hvalue|hyears|     loan|year|
+---+---------+----------+-----+--------+------+---------+----+
|  1|110499.73|       0.0|car15|235000.0|    30| 354724.2|2020|
|  2|140893.77|       0.0|car20|135000.0|     2|395015.34|2020|
|  3|119159.65|       0.0| car1|145000.0|    22|122025.08|2020|
|  4|  20000.0|  52593.63| car9|185000.0|    30| 99629.62|2020|
+---+---------+----------+-----+--------+------+---------+----+
demographic_bulk_insert_data = [
    Row(1, 54, "level3", "zipcode5"),
    Row(2, 44, "level4", "zipcode8"),
    Row(3, 49, "level2", "zipcode4"),
    Row(4, 56, "level0", "zipcode2")    
]

demographic_bulk_insert_df = spark.createDataFrame(demographic_bulk_insert_data, demographic_fg_schema)
demographic_bulk_insert_df.show()
+---+---+------+--------+
| id|age|elevel| zipcode|
+---+---+------+--------+
|  1| 54|level3|zipcode5|
|  2| 44|level4|zipcode8|
|  3| 49|level2|zipcode4|
|  4| 56|level0|zipcode2|
+---+---+------+--------+
class_bulk_insert_data = [
    Row(1, "groupB", 2020),
    Row(2, "groupB", 2020),
    Row(3, "groupB", 2020),
    Row(4, "groupB", 2020)    
]

class_bulk_insert_df = spark.createDataFrame(class_bulk_insert_data, class_fg_schema)
class_bulk_insert_df.show()
+---+------+----+
| id| class|year|
+---+------+----+
|  1|groupB|2020|
|  2|groupB|2020|
|  3|groupB|2020|
|  4|groupB|2020|
+---+------+----+

Create feature groups

Now we will create each feature group with the time travel format set to "HUDI". In the Hopsworks Feature Store a primary key must be provided for HUDI-enabled feature groups; a partition key and a Hudi precombine key are optional, as the examples below show.

economy_fg = fs.create_feature_group(
    name="economy_fg",
    description="Hudi Household Economy Feature Group",
    version=2,
    primary_key=["id"],
    partition_key=["year"],
    hudi_precombine_key="id",
    time_travel_format="HUDI"
)
demography_fg = fs.create_feature_group(
    name="demography_fg",
    description="Hudi Demographic Feature Group",
    version=2,
    primary_key=["id"],
    partition_key=["zipcode"],
    time_travel_format="HUDI"
)
class_fg = fs.create_feature_group(
    name="class_fg",
    description="Hudi Class Feature Group",
    version=2,
    primary_key=["id"],
    hudi_precombine_key="year",
    time_travel_format="HUDI"
)

Define user-provided Hudi options

By default, Hudi tends to over-partition the input. The recommended shuffle parallelism for hoodie.[insert|upsert|bulkinsert].shuffle.parallelism is at least input_data_size/500MB.
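
As a back-of-the-envelope example (with an assumed, hypothetical input size): a 5 GB upsert would call for a parallelism of about 10, while for the tiny dataframes in this demo a value of 1 is plenty:

# Rule-of-thumb sizing sketch; input_size_mb is an assumed figure, not measured.
input_size_mb = 5 * 1024                    # hypothetical 5 GB of input data
parallelism = max(1, input_size_mb // 500)  # at least input_data_size / 500MB
print(parallelism)                          # -> 10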

extra_hudi_options = {
    "hoodie.insert.shuffle.parallelism":"1", 
    "hoodie.upsert.shuffle.parallelism":"1",
    "hoodie.parquet.compression.ratio":"0.5"
} 

Bulk insert data into the feature group

Since we have not yet saved any data into the newly created feature groups, the first write will be a bulk insert, to use the Apache Hudi terminology. In HSFS this is done by simply calling the save method.

economy_fg.save(economy_bulk_insert_df, write_options=extra_hudi_options)
<hsfs.feature_group.FeatureGroup object at 0x7fe20b5fe350>
demography_fg.save(demographic_bulk_insert_df, write_options=extra_hudi_options)
<hsfs.feature_group.FeatureGroup object at 0x7fe20b57d950>
class_fg.save(class_bulk_insert_df, write_options=extra_hudi_options)
<hsfs.feature_group.FeatureGroup object at 0x7fe20b57d350>

Hopsworks Feature Store Commits

If you have followed this demo closely you will have noticed that the Hopsworks Feature Store uses Apache Hudi as its time travel engine. Hudi introduces the notion of commits, which means that it supports certain properties of traditional databases, such as single-table transactions, snapshot isolation, atomic upserts and savepoints for data recovery. If an ingestion fails for some reason, no partial results are written; instead, the ingestion is rolled back. The commit is implemented using an atomic rename (mv) operation in HDFS.

Currently, the feature groups we created contain only a single commit each, as we have just done a single bulk insert. Let's explore the commit timeline of economy_fg:

for item in economy_fg.commit_details().items():
    print(item)
(1609945250000, {'committedOn': '20210106150050', 'rowsUpdated': 0, 'rowsInserted': 4, 'rowsDeleted': 0})
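
Note that each commit is identified both by a numeric commit id (epoch milliseconds) and a human-readable committedOn string. If you need the latter as a Python datetime, here is a small sketch using the datetime module imported earlier (assuming the yyyyMMddHHmmss layout seen above):

# Parse a 'committedOn' string (assumed layout: yyyyMMddHHmmss, UTC) into a datetime.
committed_on = datetime.datetime.strptime('20210106150050', '%Y%m%d%H%M%S')
print(committed_on)  # -> 2021-01-06 15:00:50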

Inspect results

economy_fg.read().show()
+---------+-----+----------+---+---------+--------+------+----+
|     loan|  car|commission| id|   salary|  hvalue|hyears|year|
+---------+-----+----------+---+---------+--------+------+----+
|395015.34|car20|       0.0|  2|140893.77|135000.0|     2|2020|
| 354724.2|car15|       0.0|  1|110499.73|235000.0|    30|2020|
| 99629.62| car9|  52593.63|  4|  20000.0|185000.0|    30|2020|
|122025.08| car1|       0.0|  3|119159.65|145000.0|    22|2020|
+---------+-----+----------+---+---------+--------+------+----+
demography_fg.read().show()
+---+------+---+--------+
|age|elevel| id| zipcode|
+---+------+---+--------+
| 49|level2|  3|zipcode4|
| 54|level3|  1|zipcode5|
| 56|level0|  4|zipcode2|
| 44|level4|  2|zipcode8|
+---+------+---+--------+
class_fg.read().show()
+----+---+------+
|year| id| class|
+----+---+------+
|2020|  3|groupB|
|2020|  4|groupB|
|2020|  2|groupB|
|2020|  1|groupB|
+----+---+------+

Upsert new data into a Feature Group

So far we have not done anything specific to time travel; we simply did a regular bulk insert of some data into a Hudi-enabled feature group. We could have done the same thing with a regular, non-Hudi-enabled feature group. Now, however, we will look at how to do upserts, and how the Hopsworks Feature Store enables us to do this efficiently.

Generate Sample Upserts Data

economy_upsert_data = [
    Row(1, 120499.73, 0.0, "car17", 205000.0, 30, 564724.18, 2020),    #update
    Row(2, 160893.77, 0.0, "car10", 179000.0, 2, 455015.33, 2020),     #update
    Row(5, 93956.32, 0.0, "car15",  135000.0, 1, 458679.82, 2020),     #insert
    Row(6, 41365.43, 52809.15, "car7", 135000.0, 19, 216839.71, 2020), #insert
    Row(7, 94805.61, 0.0, "car17", 135000.0, 23, 233216.07, 2020)      #insert    
]

economy_upsert_df = spark.createDataFrame(economy_upsert_data, economy_fg_schema)

economy_upsert_df.show(5)
+---+---------+----------+-----+--------+------+---------+----+
| id|   salary|commission|  car|  hvalue|hyears|     loan|year|
+---+---------+----------+-----+--------+------+---------+----+
|  1|120499.73|       0.0|car17|205000.0|    30| 564724.2|2020|
|  2|160893.77|       0.0|car10|179000.0|     2|455015.34|2020|
|  5| 93956.32|       0.0|car15|135000.0|     1| 458679.8|2020|
|  6| 41365.43|  52809.15| car7|135000.0|    19| 216839.7|2020|
|  7| 94805.61|       0.0|car17|135000.0|    23|233216.06|2020|
+---+---------+----------+-----+--------+------+---------+----+
demographic_upsert_data = [
    Row(2, 44, "level1", "zipcode8"),     #update
    Row(5, 59, "level1", "zipcode2"),     #insert
    Row(6, 71, "level2", "zipcode3"),     #insert
    Row(7, 32, "level1", "zipcode2")      #insert    
]

demographic_upsert_df = spark.createDataFrame(demographic_upsert_data, demographic_fg_schema)

demographic_upsert_df.show()
+---+---+------+--------+
| id|age|elevel| zipcode|
+---+---+------+--------+
|  2| 44|level1|zipcode8|
|  5| 59|level1|zipcode2|
|  6| 71|level2|zipcode3|
|  7| 32|level1|zipcode2|
+---+---+------+--------+
class_upsert_data = [
    Row(1, "groupA", 2020), #update
    Row(5, "groupA", 2020), #insert
    Row(6, "groupA", 2020), #insert
    Row(7, "groupA", 2020)  #insert    
] 

class_upsert_df = spark.createDataFrame(class_upsert_data, class_fg_schema)

class_upsert_df.show()
+---+------+----+
| id| class|year|
+---+------+----+
|  1|groupA|2020|
|  5|groupA|2020|
|  6|groupA|2020|
|  7|groupA|2020|
+---+------+----+
economy_upsert_df.show()
+---+---------+----------+-----+--------+------+---------+----+
| id|   salary|commission|  car|  hvalue|hyears|     loan|year|
+---+---------+----------+-----+--------+------+---------+----+
|  1|120499.73|       0.0|car17|205000.0|    30| 564724.2|2020|
|  2|160893.77|       0.0|car10|179000.0|     2|455015.34|2020|
|  5| 93956.32|       0.0|car15|135000.0|     1| 458679.8|2020|
|  6| 41365.43|  52809.15| car7|135000.0|    19| 216839.7|2020|
|  7| 94805.61|       0.0|car17|135000.0|    23|233216.06|2020|
+---+---------+----------+-----+--------+------+---------+----+
economy_fg.read().show(5)
+---------+-----+----------+---+---------+--------+------+----+
|     loan|  car|commission| id|   salary|  hvalue|hyears|year|
+---------+-----+----------+---+---------+--------+------+----+
|395015.34|car20|       0.0|  2|140893.77|135000.0|     2|2020|
| 354724.2|car15|       0.0|  1|110499.73|235000.0|    30|2020|
| 99629.62| car9|  52593.63|  4|  20000.0|185000.0|    30|2020|
|122025.08| car1|       0.0|  3|119159.65|145000.0|    22|2020|
+---------+-----+----------+---+---------+--------+------+----+

Make the Upsert using Hopsworks Feature Store API

In the Hopsworks Feature Store, calling the insert method on an Apache Hudi-enabled feature group will by default perform an upsert operation: on the basis of the primary and partition keys, it either inserts a new row or updates an already existing one.

economy_fg.insert(economy_upsert_df, write_options=extra_hudi_options)
demography_fg.insert(demographic_upsert_df, write_options=extra_hudi_options)
class_fg.insert(class_upsert_df, write_options=extra_hudi_options)

Inspect the results

Notice that although a Hudi-enabled feature group stores the old values of records from previous commits, a query will only return the values of the latest commit.

economy_fg.read().show()
+---------+-----+----------+---+---------+--------+------+----+
|     loan|  car|commission| id|   salary|  hvalue|hyears|year|
+---------+-----+----------+---+---------+--------+------+----+
| 564724.2|car17|       0.0|  1|120499.73|205000.0|    30|2020|
|455015.34|car10|       0.0|  2|160893.77|179000.0|     2|2020|
|122025.08| car1|       0.0|  3|119159.65|145000.0|    22|2020|
| 99629.62| car9|  52593.63|  4|  20000.0|185000.0|    30|2020|
| 458679.8|car15|       0.0|  5| 93956.32|135000.0|     1|2020|
| 216839.7| car7|  52809.15|  6| 41365.43|135000.0|    19|2020|
|233216.06|car17|       0.0|  7| 94805.61|135000.0|    23|2020|
+---------+-----+----------+---+---------+--------+------+----+
demography_fg.read().show()
+---+------+---+--------+
|age|elevel| id| zipcode|
+---+------+---+--------+
| 71|level2|  6|zipcode3|
| 44|level1|  2|zipcode8|
| 49|level2|  3|zipcode4|
| 54|level3|  1|zipcode5|
| 56|level0|  4|zipcode2|
| 59|level1|  5|zipcode2|
| 32|level1|  7|zipcode2|
+---+------+---+--------+
class_fg.read().show()
+----+---+------+
|year| id| class|
+----+---+------+
|2020|  1|groupA|
|2020|  3|groupB|
|2020|  2|groupB|
|2020|  4|groupB|
|2020|  5|groupA|
|2020|  6|groupA|
|2020|  7|groupA|
+----+---+------+

Inspect the updated commit timelines of the feature groups

for item in economy_fg.commit_details().items():
    print(item)
(1609945408000, {'committedOn': '20210106150328', 'rowsUpdated': 2, 'rowsInserted': 3, 'rowsDeleted': 0})
(1609945250000, {'committedOn': '20210106150050', 'rowsUpdated': 0, 'rowsInserted': 4, 'rowsDeleted': 0})
for item in demography_fg.commit_details().items():
    print(item)
(1609945480000, {'committedOn': '20210106150440', 'rowsUpdated': 1, 'rowsInserted': 3, 'rowsDeleted': 0})
(1609945319000, {'committedOn': '20210106150159', 'rowsUpdated': 0, 'rowsInserted': 4, 'rowsDeleted': 0})
for item in class_fg.commit_details().items():
    print(item)
(1609945530000, {'committedOn': '20210106150530', 'rowsUpdated': 1, 'rowsInserted': 3, 'rowsDeleted': 0})
(1609945353000, {'committedOn': '20210106150233', 'rowsUpdated': 0, 'rowsInserted': 4, 'rowsDeleted': 0})

Let's make one more commit to better demonstrate the time travel capabilities of the Hopsworks Feature Store

economy_upsert_data = [
    Row(8, 64410.62, 39884.39, "car20",  125000.0, 6, 350707.38, 2020), #insert
    Row(9, 128298.82, 0.0, "car19",  135000.0, 12, 20768.06, 2020),     #insert
    Row(10,100806.92, 0.0, "car8", 135000.0, 6, 293106.65, 2020)        #insert       
]

economy_upsert_df = spark.createDataFrame(economy_upsert_data, economy_fg_schema)

economy_upsert_df.show(5)
+---+---------+----------+-----+--------+------+---------+----+
| id|   salary|commission|  car|  hvalue|hyears|     loan|year|
+---+---------+----------+-----+--------+------+---------+----+
|  8| 64410.62|  39884.39|car20|125000.0|     6|350707.38|2020|
|  9|128298.82|       0.0|car19|135000.0|    12| 20768.06|2020|
| 10|100806.92|       0.0| car8|135000.0|     6|293106.66|2020|
+---+---------+----------+-----+--------+------+---------+----+
demographic_upsert_data = [
    Row(8, 33, "level2", "zipcode1"),     #insert
    Row(9, 32, "level1", "zipcode3"),     #insert
    Row(10, 58, "level2", "zipcode5")     #insert        
]

demographic_upsert_df = spark.createDataFrame(demographic_upsert_data, demographic_fg_schema)

demographic_upsert_df.show(5)
+---+---+------+--------+
| id|age|elevel| zipcode|
+---+---+------+--------+
|  8| 33|level2|zipcode1|
|  9| 32|level1|zipcode3|
| 10| 58|level2|zipcode5|
+---+---+------+--------+
class_upsert_data = [
    Row(8, "groupA", 2020), #insert
    Row(9, "groupA", 2020), #insert
    Row(10, "groupB", 2020) #insert        
]

class_upsert_df = spark.createDataFrame(class_upsert_data, class_fg_schema)

class_upsert_df.show(5)
+---+------+----+
| id| class|year|
+---+------+----+
|  8|groupA|2020|
|  9|groupA|2020|
| 10|groupB|2020|
+---+------+----+
economy_fg.insert(economy_upsert_df, write_options=extra_hudi_options)
demography_fg.insert(demographic_upsert_df, write_options=extra_hudi_options)
class_fg.insert(class_upsert_df, write_options=extra_hudi_options)

Time Travel Queries

When the read method is called on a FeatureGroup object without any parameters, the most recent view of the feature group is returned.

economy_fg.read().show()
+---------+-----+----------+---+---------+--------+------+----+
|     loan|  car|commission| id|   salary|  hvalue|hyears|year|
+---------+-----+----------+---+---------+--------+------+----+
| 564724.2|car17|       0.0|  1|120499.73|205000.0|    30|2020|
|455015.34|car10|       0.0|  2|160893.77|179000.0|     2|2020|
| 99629.62| car9|  52593.63|  4|  20000.0|185000.0|    30|2020|
| 458679.8|car15|       0.0|  5| 93956.32|135000.0|     1|2020|
| 216839.7| car7|  52809.15|  6| 41365.43|135000.0|    19|2020|
|233216.06|car17|       0.0|  7| 94805.61|135000.0|    23|2020|
|350707.38|car20|  39884.39|  8| 64410.62|125000.0|     6|2020|
| 20768.06|car19|       0.0|  9|128298.82|135000.0|    12|2020|
|293106.66| car8|       0.0| 10|100806.92|135000.0|     6|2020|
|122025.08| car1|       0.0|  3|119159.65|145000.0|    22|2020|
+---------+-----+----------+---+---------+--------+------+----+

Using the timeline metadata we can inspect the value of a table at a specific point in time, as well as pull changes incrementally.

for item in economy_fg.commit_details().items():
    print(item)

commit_timestamps = [economy_fg.commit_details()[c]['committedOn'] for c in sorted(economy_fg.commit_details().keys())]
(1609945604000, {'committedOn': '20210106150644', 'rowsUpdated': 0, 'rowsInserted': 3, 'rowsDeleted': 0})
(1609945408000, {'committedOn': '20210106150328', 'rowsUpdated': 2, 'rowsInserted': 3, 'rowsDeleted': 0})
(1609945250000, {'committedOn': '20210106150050', 'rowsUpdated': 0, 'rowsInserted': 4, 'rowsDeleted': 0})
# read the table state as of the 1st commit
economy_fg.read(commit_timestamps[0]).show()
+---------+-----+----------+---+---------+--------+------+----+
|     loan|  car|commission| id|   salary|  hvalue|hyears|year|
+---------+-----+----------+---+---------+--------+------+----+
|395015.34|car20|       0.0|  2|140893.77|135000.0|     2|2020|
| 354724.2|car15|       0.0|  1|110499.73|235000.0|    30|2020|
| 99629.62| car9|  52593.63|  4|  20000.0|185000.0|    30|2020|
|122025.08| car1|       0.0|  3|119159.65|145000.0|    22|2020|
+---------+-----+----------+---+---------+--------+------+----+
# read the table state as of the 2nd commit
economy_fg.read(commit_timestamps[1]).show()
+---------+-----+----------+---+---------+--------+------+----+
|     loan|  car|commission| id|   salary|  hvalue|hyears|year|
+---------+-----+----------+---+---------+--------+------+----+
| 564724.2|car17|       0.0|  1|120499.73|205000.0|    30|2020|
|455015.34|car10|       0.0|  2|160893.77|179000.0|     2|2020|
|122025.08| car1|       0.0|  3|119159.65|145000.0|    22|2020|
| 99629.62| car9|  52593.63|  4|  20000.0|185000.0|    30|2020|
| 458679.8|car15|       0.0|  5| 93956.32|135000.0|     1|2020|
| 216839.7| car7|  52809.15|  6| 41365.43|135000.0|    19|2020|
|233216.06|car17|       0.0|  7| 94805.61|135000.0|    23|2020|
+---------+-----+----------+---+---------+--------+------+----+
# read the table state as of the 3rd commit
economy_fg.read(commit_timestamps[2]).show()
+---------+-----+----------+---+---------+--------+------+----+
|     loan|  car|commission| id|   salary|  hvalue|hyears|year|
+---------+-----+----------+---+---------+--------+------+----+
| 564724.2|car17|       0.0|  1|120499.73|205000.0|    30|2020|
|455015.34|car10|       0.0|  2|160893.77|179000.0|     2|2020|
| 99629.62| car9|  52593.63|  4|  20000.0|185000.0|    30|2020|
| 458679.8|car15|       0.0|  5| 93956.32|135000.0|     1|2020|
| 216839.7| car7|  52809.15|  6| 41365.43|135000.0|    19|2020|
|233216.06|car17|       0.0|  7| 94805.61|135000.0|    23|2020|
|350707.38|car20|  39884.39|  8| 64410.62|125000.0|     6|2020|
| 20768.06|car19|       0.0|  9|128298.82|135000.0|    12|2020|
|293106.66| car8|       0.0| 10|100806.92|135000.0|     6|2020|
|122025.08| car1|       0.0|  3|119159.65|145000.0|    22|2020|
+---------+-----+----------+---+---------+--------+------+----+

Hopsworks Feature Store also provides a method for incremental reads:

#Pull changes that happened between the first and second commits
economy_fg.read_changes(commit_timestamps[0], commit_timestamps[1]).show()
+---------+-----+----------+---+---------+--------+------+----+
|     loan|  car|commission| id|   salary|  hvalue|hyears|year|
+---------+-----+----------+---+---------+--------+------+----+
| 564724.2|car17|       0.0|  1|120499.73|205000.0|    30|2020|
|455015.34|car10|       0.0|  2|160893.77|179000.0|     2|2020|
| 458679.8|car15|       0.0|  5| 93956.32|135000.0|     1|2020|
| 216839.7| car7|  52809.15|  6| 41365.43|135000.0|    19|2020|
|233216.06|car17|       0.0|  7| 94805.61|135000.0|    23|2020|
+---------+-----+----------+---+---------+--------+------+----+
#Pull changes that happened between the second and third commits 
economy_fg.read_changes(commit_timestamps[1], commit_timestamps[2]).show()
+---------+-----+----------+---+---------+--------+------+----+
|     loan|  car|commission| id|   salary|  hvalue|hyears|year|
+---------+-----+----------+---+---------+--------+------+----+
|350707.38|car20|  39884.39|  8| 64410.62|125000.0|     6|2020|
| 20768.06|car19|       0.0|  9|128298.82|135000.0|    12|2020|
|293106.66| car8|       0.0| 10|100806.92|135000.0|     6|2020|
+---------+-----+----------+---+---------+--------+------+----+

Join feature groups that correspond to a specific point in time

If we are interested in joining feature groups that all correspond to one specific point in time, we can call the as_of method on the joined Query object.

joined_features = ((economy_fg.select_all())
                   .join(demography_fg.select_all(), ["id"], "INNER")
                   .join(class_fg.select_all(), ["id"], "INNER")
                   .as_of(commit_timestamps[2]))  
joined_features.read().show()
+---------+-----+----------+---+---------+--------+------+----+---+------+---+--------+----+---+------+
|     loan|  car|commission| id|   salary|  hvalue|hyears|year|age|elevel| id| zipcode|year| id| class|
+---------+-----+----------+---+---------+--------+------+----+---+------+---+--------+----+---+------+
| 564724.2|car17|       0.0|  1|120499.73|205000.0|    30|2020| 54|level3|  1|zipcode5|2020|  1|groupA|
| 216839.7| car7|  52809.15|  6| 41365.43|135000.0|    19|2020| 71|level2|  6|zipcode3|2020|  6|groupA|
|122025.08| car1|       0.0|  3|119159.65|145000.0|    22|2020| 49|level2|  3|zipcode4|2020|  3|groupB|
| 458679.8|car15|       0.0|  5| 93956.32|135000.0|     1|2020| 59|level1|  5|zipcode2|2020|  5|groupA|
| 99629.62| car9|  52593.63|  4|  20000.0|185000.0|    30|2020| 56|level0|  4|zipcode2|2020|  4|groupB|
|233216.06|car17|       0.0|  7| 94805.61|135000.0|    23|2020| 32|level1|  7|zipcode2|2020|  7|groupA|
|455015.34|car10|       0.0|  2|160893.77|179000.0|     2|2020| 44|level1|  2|zipcode8|2020|  2|groupB|
+---------+-----+----------+---+---------+--------+------+----+---+------+---+--------+----+---+------+

Join feature groups that correspond to different points in time

The Hopsworks Feature Store also provides functionality to join feature groups that correspond to different points in time.

economy_fg_query = economy_fg.select_all().as_of(commit_timestamps[2])

second_demography_commit = demography_fg.commit_details()[sorted(demography_fg.commit_details().keys())[1]]['committedOn']
demography_fg_query = demography_fg.select_all().as_of(second_demography_commit)

first_class_commit = class_fg.commit_details()[sorted(class_fg.commit_details().keys())[0]]['committedOn']
class_fg_query =  class_fg.select_all().as_of(first_class_commit)
joined_features = economy_fg_query.join(demography_fg_query, ["id"], "INNER").join(class_fg_query, ["id"], "INNER")
joined_features.read().show()
+---------+-----+----------+---+---------+--------+------+----+---+------+---+--------+----+---+------+
|     loan|  car|commission| id|   salary|  hvalue|hyears|year|age|elevel| id| zipcode|year| id| class|
+---------+-----+----------+---+---------+--------+------+----+---+------+---+--------+----+---+------+
| 564724.2|car17|       0.0|  1|120499.73|205000.0|    30|2020| 54|level3|  1|zipcode5|2020|  1|groupB|
|122025.08| car1|       0.0|  3|119159.65|145000.0|    22|2020| 49|level2|  3|zipcode4|2020|  3|groupB|
| 99629.62| car9|  52593.63|  4|  20000.0|185000.0|    30|2020| 56|level0|  4|zipcode2|2020|  4|groupB|
|455015.34|car10|       0.0|  2|160893.77|179000.0|     2|2020| 44|level1|  2|zipcode8|2020|  2|groupB|
+---------+-----+----------+---+---------+--------+------+----+---+------+---+--------+----+---+------+