Do you want to try out this notebook? Get a free account (no credit card required) at hopsworks.ai. You can also install open-source Hopsworks or view tutorial videos here.
6. Serve the autoencoder and detect anomalous credit card activity
Query Model Repository for the best credit card fraud model
from hops import model
from hops.model import Metric

MODEL_NAME = "ccfraudmodel"
EVALUATION_METRIC = "loss"

# Fetch the model version with the lowest loss from the model registry
best_model = model.get_best_model(MODEL_NAME, EVALUATION_METRIC, Metric.MIN)

print('Model name: ' + best_model['name'])
print('Model version: ' + str(best_model['version']))
print(best_model['metrics'])
Model name: ccfraudmodel
Model version: 1
{'loss': '1.74222993850708'}
Create Model Serving for the Exported Model
from hops import serving
from hops import hdfs

TOPIC_NAME = "credit_card_prediction_logs"
SERVING_NAME = MODEL_NAME
MODEL_PATH = "/Models/" + best_model['name']
TRANSFORMER_PATH = "/Projects/" + hdfs.project_name() + "/Jupyter/card_activity_transformer.py"

# Deploy the model on KFServing with a transformer in front of it,
# logging all inference requests and responses to the Kafka topic
response = serving.create_or_update(SERVING_NAME, MODEL_PATH, model_version=best_model['version'], artifact_version="CREATE",
                                    kfserving=True, transformer=TRANSFORMER_PATH,
                                    topic_name=TOPIC_NAME, inference_logging="ALL",
                                    instances=1, transformer_instances=1)
Inferring model server from artifact files: TENSORFLOW_SERVING
Creating serving ccfraudmodel for artifact /Projects/card_fraud_detection//Models/ccfraudmodel ...
Serving ccfraudmodel successfully created
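The deployment points at card_activity_transformer.py, which runs as a KFServing transformer in front of the model and turns an incoming cc_num into the full feature vector before the autoencoder sees it. The file itself is not shown in this notebook; the following is only a minimal sketch of what such a transformer could look like, assuming the standard Hopsworks Transformer class with preprocess/postprocess hooks and the training dataset's get_serving_vector lookup.

# card_activity_transformer.py -- illustrative sketch, not the actual file used above
import hsfs

class Transformer(object):
    def __init__(self):
        # Connect to the online feature store once, when the transformer starts
        fs = hsfs.connection().get_feature_store()
        self.td_meta = fs.get_training_dataset("card_fraud_model", 1)
        self.td_meta.init_prepared_statement()

    def preprocess(self, inputs):
        # Replace each {'cc_num': ...} instance with the feature vector
        # looked up by primary key in the online feature store
        inputs["instances"] = [
            self.td_meta.get_serving_vector({"cc_num": instance["cc_num"]})
            for instance in inputs["instances"]
        ]
        return inputs

    def postprocess(self, outputs):
        # Pass the model's reconstruction errors through unchanged
        return outputs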
# List all available servings in the project
for s in serving.get_all():
    print(s.name)
ccfraudmodel
# Get serving status
serving.get_status(SERVING_NAME)
'Stopped'
Start Model Serving Server
if serving.get_status(SERVING_NAME) == 'Stopped':
    serving.start(SERVING_NAME)
Starting serving with name: ccfraudmodel...
Serving with name: ccfraudmodel successfully started
import time

# Wait until the serving reaches the Running state
while serving.get_status(SERVING_NAME) != "Running":
    time.sleep(5)  # Let the serving start up correctly
time.sleep(5)
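The loop above blocks indefinitely if the deployment never reaches the Running state. A bounded wait fails fast instead; the sketch below uses an arbitrary five-minute limit.

# Optional: wait for the serving with a timeout instead of looping forever
# (the 300-second limit is an illustrative choice)
deadline = time.time() + 300
while serving.get_status(SERVING_NAME) != "Running":
    if time.time() > deadline:
        raise TimeoutError("Serving " + SERVING_NAME + " did not reach Running in time")
    time.sleep(5)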
Sample credit card numbers
import hsfs
connection = hsfs.connection()
fs = connection.get_feature_store()
Connected. Call `.close()` to terminate connection gracefully.
td_meta = fs.get_training_dataset("card_fraud_model", 1)
# `init_prepared_statement` is needed to get serving_keys in case `get_serving_vector` has not been called yet.
# It is not necessary for the `get_serving_vector` method itself.
td_meta.init_prepared_statement()
td_meta.serving_keys
{'cc_num'}
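With the prepared statements initialized, a single feature vector can be fetched from the online feature store by its primary key via get_serving_vector; this is the same lookup the transformer performs for every request. The card number below is a placeholder value, used only for illustration.

# Fetch the feature vector for a single (hypothetical) card number
example_cc_num = 4444037300542691  # placeholder value for illustration
feature_vector = td_meta.get_serving_vector({"cc_num": example_cc_num})
print(feature_vector)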
For demo purposes, let's prepare a list of primary key values that we are interested in, to build feature vectors from the online feature store.
cc_nums = fs.get_feature_group("card_transactions", version=1).select("cc_num").read()
Lazily executing query: SELECT `fg0`.`cc_num`
FROM `card_fraud_detection_featurestore`.`card_transactions_1` `fg0`
cc_nums_inputs = cc_nums.sample(n=int(len(cc_nums)/10), replace=True)
len(cc_nums_inputs)
10800
Get serving vectors and send prediction requests to the served model using the Hopsworks REST API
import numpy as np
TOPIC_NAME = serving.get_kafka_topic(SERVING_NAME)
print("Topic: " + TOPIC_NAME)
Topic: credit_card_prediction_logs
i = 0
for cc_num in cc_nums_inputs['fg0.cc_num']:
    # Only the primary key is sent; the transformer builds the full feature vector
    data = {"signature_name": "serving_default", "instances": [{'cc_num': int(cc_num)}]}
    response = serving.make_inference_request(SERVING_NAME, data)
    if i % 500 == 0:
        print(response)
    i += 1
{'predictions': [2.20478344]}
{'predictions': [0.955848038]}
{'predictions': [2.77835608]}
{'predictions': [1.0319711]}
{'predictions': [1.04669809]}
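Each prediction above is the autoencoder's reconstruction error for one card's activity: the larger the error, the less that activity resembles the normal pattern the model was trained on. A simple way to turn these scores into anomaly flags is to threshold them at a high percentile of the observed errors. The sketch below assumes the scores were collected into a list called scores inside the loop above (e.g. scores.append(response['predictions'][0])); the 99th-percentile cutoff is an arbitrary choice for illustration.

import numpy as np

# Assumes the loop above also did: scores.append(response['predictions'][0])
scores = np.asarray(scores)
cc_num_values = cc_nums_inputs['fg0.cc_num'].values  # same order and length as the scores

# Flag cards whose reconstruction error falls in the top 1% (illustrative threshold)
threshold = np.percentile(scores, 99)
anomalous = cc_num_values[scores > threshold]

print("Anomaly threshold:", threshold)
print("Number of flagged cards:", len(anomalous))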