The IPython notebook with the complete code and dataset is available at the following link.
In this tutorial, we'll apply a denoising autoencoder to market basket data for collaborative filtering. The learned model is then able to recommend similar items to users, based on the ones already in their basket. We use the Groceries dataset, which consists of 9,835 transactions (or baskets) of items purchased together. We'll divide the dataset into train, validation, and test sets. Figure 1 depicts the raw dataset; we need to convert it into a format suitable for model input. To do so, each transaction will be represented as a binary vector, where one indicates that an item is in the basket and zero indicates that it is not. Let us load the dataset and define a few helper functions: one to find the unique items, one to convert transactions into a one-hot encoded representation, and one to convert a binary vector back into items. Figure 2 shows the ten most frequent items in the dataset.
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.metrics import roc_auc_score
import matplotlib.pyplot as plt
import seaborn as sns

sns.set_style("whitegrid")

# Baskets have at most 32 items, so read 32 columns.
cols = [str(i) for i in range(1, 33)]
df = pd.read_csv("groceries.csv", sep=",", names=cols, engine="python")
data = np.array(df)
def get_unique_items(data):
    """Collect the unique item names, dropping the None padding."""
    ncol = data.shape[1]
    items = set()
    for c in range(ncol):
        items = items.union(data[:, c])
    items = np.array(list(items))
    items = items[items != np.array(None)]
    return np.unique(items)

def get_onehot_items(data, unique_items):
    """Encode each basket as a binary vector over the unique items."""
    onehot_items = np.zeros((len(data), len(unique_items)), dtype=int)
    for i, r in enumerate(data):
        for j, c in enumerate(unique_items):
            onehot_items[i, j] = int(c in r)
    return onehot_items

def get_items_from_ohe(ohe, unique_items):
    """Decode a binary vector back to the item names it contains."""
    return unique_items[np.flatnonzero(ohe)]
unique_items = get_unique_items(data)
onehot_items = np.array(get_onehot_items(data, unique_items))
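To see what the one-hot encoding does before running it on the full Groceries file, here is a minimal self-contained sketch on a tiny synthetic basket list (the item names are made up for illustration); it encodes two baskets and decodes one of them back:

```python
import numpy as np

# Tiny synthetic dataset: each row is a basket, padded with None.
baskets = np.array([
    ["milk", "bread", None],
    ["bread", "eggs", "butter"],
], dtype=object)

# Unique items, ignoring the None padding.
unique_items = np.unique([x for row in baskets for x in row if x is not None])

# Binary matrix: 1 if the item occurs in the basket, 0 otherwise.
onehot = np.array([[int(item in row) for item in unique_items] for row in baskets])

# Decode the first basket back to item names.
decoded = unique_items[np.flatnonzero(onehot[0])]
print(sorted(decoded))  # ['bread', 'milk']
```

Each column of `onehot` corresponds to one entry of `unique_items`, which is exactly the representation the autoencoder will consume.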
n = 10
item_counts = (onehot_items != 0).sum(0)
items_max_args = item_counts.argsort()[-n:][::-1]
ic = pd.DataFrame({"Items": unique_items[items_max_args],
                   "Frequency": item_counts[items_max_args]})
fig = plt.figure(figsize=(16, 8))
sns.barplot(x="Items", y="Frequency", data=ic, palette=sns.color_palette("Set2", n))
plt.xlabel("Items")
plt.ylabel("Frequency")
plt.title(str(n) + " most frequent items in the dataset")
plt.show()
Figure 1: Raw market basket data
Figure 2: Most frequent items in the dataset
train_test_split = np.random.rand(len(onehot_items)) < 0.80
train_x = onehot_items[train_test_split]
test_x = onehot_items[~train_test_split]
train_validation_split = np.random.rand(len(train_x)) < 0.80
validation_x = train_x[~train_validation_split]
train_x = train_x[train_validation_split]
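This mask-based splitting is easy to sanity-check on synthetic data. The sketch below uses a seeded generator and a dummy array (both illustrative, not part of the tutorial's pipeline) to confirm that every example lands in exactly one of the three splits; the split sizes are only approximately 80/20 because the masks are random:

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.arange(1000)

# ~80% train+validation, ~20% test.
mask = rng.random(len(data)) < 0.80
train_val, test = data[mask], data[~mask]

# Split train_val again: ~80% train, ~20% validation.
mask2 = rng.random(len(train_val)) < 0.80
train, validation = train_val[mask2], train_val[~mask2]

# Every example lands in exactly one split, with no overlap.
assert len(train) + len(validation) + len(test) == len(data)
print(len(train), len(validation), len(test))
```

Because the masks are boolean arrays, an element selected by `mask` can never appear in `~mask`, so the splits are disjoint by construction.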
The autoencoder model learns to approximately reconstruct its own input. While doing so, it learns a salient representation of the data; hence, it is also used for dimensionality reduction. The model consists of two components: an encoder and a decoder. The encoder maps the input from R^n to R^d (with d < n), and the decoder reconstructs the input from the reduced representation, i.e., it maps R^d back to R^n.
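A minimal sketch of such an encoder/decoder pair using the Keras API is shown below. The layer sizes here are illustrative placeholders, not the ones used later in the tutorial; sigmoid outputs are chosen because the basket vectors are binary:

```python
import numpy as np
import tensorflow as tf

n_items = 169   # illustrative input dimension (number of unique items)
d = 32          # illustrative bottleneck dimension, d < n

# Encoder: maps R^n -> R^d.
encoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_items,)),
    tf.keras.layers.Dense(d, activation="relu"),
])

# Decoder: maps R^d -> R^n; sigmoid suits binary basket vectors.
decoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(d,)),
    tf.keras.layers.Dense(n_items, activation="sigmoid"),
])

autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Reconstruct a small random batch of binary vectors.
x = np.random.randint(0, 2, size=(4, n_items)).astype("float32")
recon = autoencoder.predict(x, verbose=0)
print(recon.shape)  # (4, 169)
```

The reconstruction has the same shape as the input, and each output entry can be read as the model's probability that the corresponding item belongs in the basket.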