{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"machine_shape": "hm",
"gpuType": "V28"
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
},
"accelerator": "TPU"
},
"cells": [
{
"cell_type": "markdown",
"source": [
"### **Business Problem Definition:**\n",
"\n",
"A company plans to enter new markets with its existing products (P1, P2, P3, P4 and P5). After intensive market research, they’ve concluded that the behavior of the new market is similar to that of their existing market.\n",
"\n",
"In their existing market, the sales team has classified all customers into 4 segments (A, B, C, D). They then performed segmented outreach and communication for each segment of customers. This strategy has worked exceptionally well for them. They plan to use the same strategy in the new markets and have identified 2627 new potential customers.\n",
"\n",
"### As a business analyst, you are required to `help the manager predict the right group allocation` of the new customers."
],
"metadata": {
"id": "gKC9wLVUwOp3"
}
},
{
"cell_type": "markdown",
"source": [
"Variable Descriptions\n",
"\n",
"ID\t-- Unique ID\n",
"\n",
"Gender\t-- Gender of the customer\n",
"\n",
"Ever_Married\t-- Marital status of the customer\n",
"\n",
"Age\t-- Age of the customer\n",
"\n",
"Graduated\t-- Is the customer a graduate?\n",
"\n",
"Profession\t-- Profession of the customer\n",
"\n",
"Work_Experience\t-- Work Experience in years\n",
"\n",
"Spending_Score\t-- Spending score of the customer\n",
"\n",
"Family_Size\t-- Number of family members for the customer (including the customer)\n",
"\n",
"Var_1\t-- Anonymised Category for the customer\n",
"\n",
"Segmentation(target)\t-- Customer Segment of the customer"
],
"metadata": {
"id": "43Znaj5Av_fb"
}
},
{
"cell_type": "code",
"source": [
"# Importing libraries\n",
"import pandas as pd\n",
"import numpy as np\n",
"\n",
"import seaborn as sns\n",
"import matplotlib.pyplot as plt"
],
"metadata": {
"id": "DPo_-1D6wYGQ"
},
"execution_count": 1,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Loading the train data\n",
"df = pd.read_csv('Train.csv')\n",
"\n",
"# Looking at the top 10 rows\n",
"df.head(10)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 363
},
"id": "PC9zyxL2wtyj",
"outputId": "1c778f9e-fc89-4ce9-e393-6fcb7b568af1"
},
"execution_count": 2,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
" ID Gender Ever_Married Age Graduated Profession Work_Experience \\\n",
"0 462809 Male No 22 No Healthcare 1.0 \n",
"1 462643 Female Yes 38 Yes Engineer NaN \n",
"2 466315 Female Yes 67 Yes Engineer 1.0 \n",
"3 461735 Male Yes 67 Yes Lawyer 0.0 \n",
"4 462669 Female Yes 40 Yes Entertainment NaN \n",
"5 461319 Male Yes 56 No Artist 0.0 \n",
"6 460156 Male No 32 Yes Healthcare 1.0 \n",
"7 464347 Female No 33 Yes Healthcare 1.0 \n",
"8 465015 Female Yes 61 Yes Engineer 0.0 \n",
"9 465176 Female Yes 55 Yes Artist 1.0 \n",
"\n",
" Spending_Score Family_Size Var_1 Segmentation \n",
"0 Low 4.0 Cat_4 D \n",
"1 Average 3.0 Cat_4 A \n",
"2 Low 1.0 Cat_6 B \n",
"3 High 2.0 Cat_6 B \n",
"4 High 6.0 Cat_6 A \n",
"5 Average 2.0 Cat_6 C \n",
"6 Low 3.0 Cat_6 C \n",
"7 Low 3.0 Cat_6 D \n",
"8 Low 3.0 Cat_7 D \n",
"9 Average 4.0 Cat_6 C "
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "dataframe",
"variable_name": "df"
}
},
"metadata": {},
"execution_count": 2
}
]
},
{
"cell_type": "code",
"source": [
"print('Number of samples:', len(df))"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "BDQiCoXjye8W",
"outputId": "001a19df-be51-4df6-beb2-5796c56fa1a4"
},
"execution_count": 3,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Number of samples: 8068\n"
]
}
]
},
{
"cell_type": "code",
"source": [
"# Looking at the bigger picture\n",
"df.info()"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "3bNPS7gRxAsr",
"outputId": "d63534eb-44a3-448a-a915-b8fa24a96fdb"
},
"execution_count": 4,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"\n",
"RangeIndex: 8068 entries, 0 to 8067\n",
"Data columns (total 11 columns):\n",
" # Column Non-Null Count Dtype \n",
"--- ------ -------------- ----- \n",
" 0 ID 8068 non-null int64 \n",
" 1 Gender 8068 non-null object \n",
" 2 Ever_Married 7928 non-null object \n",
" 3 Age 8068 non-null int64 \n",
" 4 Graduated 7990 non-null object \n",
" 5 Profession 7944 non-null object \n",
" 6 Work_Experience 7239 non-null float64\n",
" 7 Spending_Score 8068 non-null object \n",
" 8 Family_Size 7733 non-null float64\n",
" 9 Var_1 7992 non-null object \n",
" 10 Segmentation 8068 non-null object \n",
"dtypes: float64(2), int64(2), object(7)\n",
"memory usage: 693.5+ KB\n"
]
}
]
},
{
"cell_type": "code",
"source": [
"def fill_missing_values(df):\n",
"    # Replace missing values in numeric columns with the median\n",
"    numeric_cols = df.select_dtypes(include=['float64', 'int64']).columns\n",
"    for col in numeric_cols:\n",
"        df[col] = df[col].fillna(df[col].median())\n",
"\n",
"    # Replace missing values in categorical columns with the mode\n",
"    categorical_cols = df.select_dtypes(include=['object']).columns\n",
"    for col in categorical_cols:\n",
"        df[col] = df[col].fillna(df[col].mode()[0])\n",
"\n",
"    # Check whether all missing values have been filled\n",
"    if df.isnull().sum().sum() == 0:\n",
"        print(\"All missing values have been replaced.\")\n",
"    else:\n",
"        print(\"Some missing values remain.\")\n",
"\n",
"    return df\n"
],
"metadata": {
"id": "Leot91jg6FCt"
},
"execution_count": 5,
"outputs": []
},
{
"cell_type": "code",
"source": [
"df = fill_missing_values(df)\n",
"df.isnull().sum()"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 447
},
"id": "LFegjW9k6Pj0",
"outputId": "b2e5e3c5-a274-42f1-e14f-6c7a7b21e8d7"
},
"execution_count": 6,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"All missing values have been replaced.\n"
]
},
{
"output_type": "execute_result",
"data": {
"text/plain": [
"ID 0\n",
"Gender 0\n",
"Ever_Married 0\n",
"Age 0\n",
"Graduated 0\n",
"Profession 0\n",
"Work_Experience 0\n",
"Spending_Score 0\n",
"Family_Size 0\n",
"Var_1 0\n",
"Segmentation 0\n",
"dtype: int64"
]
},
"metadata": {},
"execution_count": 6
}
]
},
{
"cell_type": "code",
"source": [
"df.info()"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "4FBrfedp8zzn",
"outputId": "e036366e-6a19-41aa-9803-19c87e4b6072"
},
"execution_count": 7,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"\n",
"RangeIndex: 8068 entries, 0 to 8067\n",
"Data columns (total 11 columns):\n",
" # Column Non-Null Count Dtype \n",
"--- ------ -------------- ----- \n",
" 0 ID 8068 non-null int64 \n",
" 1 Gender 8068 non-null object \n",
" 2 Ever_Married 8068 non-null object \n",
" 3 Age 8068 non-null int64 \n",
" 4 Graduated 8068 non-null object \n",
" 5 Profession 8068 non-null object \n",
" 6 Work_Experience 8068 non-null float64\n",
" 7 Spending_Score 8068 non-null object \n",
" 8 Family_Size 8068 non-null float64\n",
" 9 Var_1 8068 non-null object \n",
" 10 Segmentation 8068 non-null object \n",
"dtypes: float64(2), int64(2), object(7)\n",
"memory usage: 693.5+ KB\n"
]
}
]
},
{
"cell_type": "code",
"source": [
"dfOnlyFeatures = df.drop(columns=['Segmentation', 'ID'])\n",
"\n",
"# Verify the structure of the new dataframe\n",
"dfOnlyFeatures.info()"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "NUtFMhqO9xa-",
"outputId": "63ed3087-fbb8-4d6d-9362-8a137b0717af"
},
"execution_count": 9,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"\n",
"RangeIndex: 8068 entries, 0 to 8067\n",
"Data columns (total 9 columns):\n",
" # Column Non-Null Count Dtype \n",
"--- ------ -------------- ----- \n",
" 0 Gender 8068 non-null object \n",
" 1 Ever_Married 8068 non-null object \n",
" 2 Age 8068 non-null int64 \n",
" 3 Graduated 8068 non-null object \n",
" 4 Profession 8068 non-null object \n",
" 5 Work_Experience 8068 non-null float64\n",
" 6 Spending_Score 8068 non-null object \n",
" 7 Family_Size 8068 non-null float64\n",
" 8 Var_1 8068 non-null object \n",
"dtypes: float64(2), int64(1), object(6)\n",
"memory usage: 567.4+ KB\n"
]
}
]
},
{
"cell_type": "code",
"source": [
"# Dummify (one-hot encode) the categorical variables\n",
"df_dummified = pd.get_dummies(dfOnlyFeatures)\n",
"\n",
"# Display the first few rows of the dummified dataset\n",
"df_dummified.head()"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 255
},
"id": "jG0Tr-qC7RgG",
"outputId": "77358ed6-cb37-4e72-df8b-fba739e3095e"
},
"execution_count": 10,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
" Age Work_Experience Family_Size Gender_Female Gender_Male \\\n",
"0 22 1.0 4.0 False True \n",
"1 38 1.0 3.0 True False \n",
"2 67 1.0 1.0 True False \n",
"3 67 0.0 2.0 False True \n",
"4 40 1.0 6.0 True False \n",
"\n",
" Ever_Married_No Ever_Married_Yes Graduated_No Graduated_Yes \\\n",
"0 True False True False \n",
"1 False True False True \n",
"2 False True False True \n",
"3 False True False True \n",
"4 False True False True \n",
"\n",
" Profession_Artist ... Spending_Score_Average Spending_Score_High \\\n",
"0 False ... False False \n",
"1 False ... True False \n",
"2 False ... False False \n",
"3 False ... False True \n",
"4 False ... False True \n",
"\n",
" Spending_Score_Low Var_1_Cat_1 Var_1_Cat_2 Var_1_Cat_3 Var_1_Cat_4 \\\n",
"0 True False False False True \n",
"1 False False False False True \n",
"2 True False False False False \n",
"3 False False False False False \n",
"4 False False False False False \n",
"\n",
" Var_1_Cat_5 Var_1_Cat_6 Var_1_Cat_7 \n",
"0 False False False \n",
"1 False False False \n",
"2 False True False \n",
"3 False True False \n",
"4 False True False \n",
"\n",
"[5 rows x 28 columns]"
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "dataframe",
"variable_name": "df_dummified"
}
},
"metadata": {},
"execution_count": 10
}
]
},
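{
"cell_type": "markdown",
"source": [
"*Aside (added, hedged):* `pd.get_dummies` above creates one column per category level, so pairs such as `Gender_Female`/`Gender_Male` are perfectly collinear. For models sensitive to multicollinearity, passing `drop_first=True` keeps one fewer column per category. A minimal sketch, assuming `dfOnlyFeatures` as built above:"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Sketch (not part of the original workflow): drop_first=True removes one\n",
"# dummy level per categorical column, e.g. keeping only Gender_Male.\n",
"pd.get_dummies(dfOnlyFeatures, drop_first=True).head()"
],
"metadata": {},
"execution_count": null,
"outputs": []
},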
{
"cell_type": "code",
"source": [
"df1 = df_dummified.copy()\n",
"df1.head()\n"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 255
},
"id": "X19Ztv4vP6Mk",
"outputId": "78f7aeaa-0111-45ba-f233-990fe9092789"
},
"execution_count": 11,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
" Age Work_Experience Family_Size Gender_Female Gender_Male \\\n",
"0 22 1.0 4.0 False True \n",
"1 38 1.0 3.0 True False \n",
"2 67 1.0 1.0 True False \n",
"3 67 0.0 2.0 False True \n",
"4 40 1.0 6.0 True False \n",
"\n",
" Ever_Married_No Ever_Married_Yes Graduated_No Graduated_Yes \\\n",
"0 True False True False \n",
"1 False True False True \n",
"2 False True False True \n",
"3 False True False True \n",
"4 False True False True \n",
"\n",
" Profession_Artist ... Spending_Score_Average Spending_Score_High \\\n",
"0 False ... False False \n",
"1 False ... True False \n",
"2 False ... False False \n",
"3 False ... False True \n",
"4 False ... False True \n",
"\n",
" Spending_Score_Low Var_1_Cat_1 Var_1_Cat_2 Var_1_Cat_3 Var_1_Cat_4 \\\n",
"0 True False False False True \n",
"1 False False False False True \n",
"2 True False False False False \n",
"3 False False False False False \n",
"4 False False False False False \n",
"\n",
" Var_1_Cat_5 Var_1_Cat_6 Var_1_Cat_7 \n",
"0 False False False \n",
"1 False False False \n",
"2 False True False \n",
"3 False True False \n",
"4 False True False \n",
"\n",
"[5 rows x 28 columns]"
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "dataframe",
"variable_name": "df1"
}
},
"metadata": {},
"execution_count": 11
}
]
},
{
"cell_type": "code",
"source": [
"df1.info()"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "6SCFyiUgCgd5",
"outputId": "1db75544-70f2-4047-cafc-1d39d92ad2d8"
},
"execution_count": 23,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"\n",
"RangeIndex: 8068 entries, 0 to 8067\n",
"Data columns (total 29 columns):\n",
" # Column Non-Null Count Dtype \n",
"--- ------ -------------- ----- \n",
" 0 Age 8068 non-null int64 \n",
" 1 Work_Experience 8068 non-null float64\n",
" 2 Family_Size 8068 non-null float64\n",
" 3 Gender_Female 8068 non-null bool \n",
" 4 Gender_Male 8068 non-null bool \n",
" 5 Ever_Married_No 8068 non-null bool \n",
" 6 Ever_Married_Yes 8068 non-null bool \n",
" 7 Graduated_No 8068 non-null bool \n",
" 8 Graduated_Yes 8068 non-null bool \n",
" 9 Profession_Artist 8068 non-null bool \n",
" 10 Profession_Doctor 8068 non-null bool \n",
" 11 Profession_Engineer 8068 non-null bool \n",
" 12 Profession_Entertainment 8068 non-null bool \n",
" 13 Profession_Executive 8068 non-null bool \n",
" 14 Profession_Healthcare 8068 non-null bool \n",
" 15 Profession_Homemaker 8068 non-null bool \n",
" 16 Profession_Lawyer 8068 non-null bool \n",
" 17 Profession_Marketing 8068 non-null bool \n",
" 18 Spending_Score_Average 8068 non-null bool \n",
" 19 Spending_Score_High 8068 non-null bool \n",
" 20 Spending_Score_Low 8068 non-null bool \n",
" 21 Var_1_Cat_1 8068 non-null bool \n",
" 22 Var_1_Cat_2 8068 non-null bool \n",
" 23 Var_1_Cat_3 8068 non-null bool \n",
" 24 Var_1_Cat_4 8068 non-null bool \n",
" 25 Var_1_Cat_5 8068 non-null bool \n",
" 26 Var_1_Cat_6 8068 non-null bool \n",
" 27 Var_1_Cat_7 8068 non-null bool \n",
" 28 Segmentation 8068 non-null int64 \n",
"dtypes: bool(25), float64(2), int64(2)\n",
"memory usage: 449.2 KB\n"
]
}
]
},
{
"cell_type": "code",
"source": [
"# Label encode the target variable\n",
"from sklearn.preprocessing import LabelEncoder\n",
"\n",
"# Initialize the LabelEncoder\n",
"label_encoder = LabelEncoder()\n",
"\n",
"# Perform label encoding on the 'Segmentation' column\n",
"df['Segmentation'] = label_encoder.fit_transform(df['Segmentation'])\n",
"\n",
"# Mapping of original classes to encoded values\n",
"label_mapping = dict(zip(label_encoder.classes_, label_encoder.transform(label_encoder.classes_)))\n",
"\n",
"df1['Segmentation'] = df['Segmentation']\n",
"df1.head()"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 255
},
"id": "KcPmlQ07-r0k",
"outputId": "351e7cdb-b410-488d-e913-704deda84159"
},
"execution_count": 24,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
" Age Work_Experience Family_Size Gender_Female Gender_Male \\\n",
"0 22 1.0 4.0 False True \n",
"1 38 1.0 3.0 True False \n",
"2 67 1.0 1.0 True False \n",
"3 67 0.0 2.0 False True \n",
"4 40 1.0 6.0 True False \n",
"\n",
" Ever_Married_No Ever_Married_Yes Graduated_No Graduated_Yes \\\n",
"0 True False True False \n",
"1 False True False True \n",
"2 False True False True \n",
"3 False True False True \n",
"4 False True False True \n",
"\n",
" Profession_Artist ... Spending_Score_High Spending_Score_Low \\\n",
"0 False ... False True \n",
"1 False ... False False \n",
"2 False ... False True \n",
"3 False ... True False \n",
"4 False ... True False \n",
"\n",
" Var_1_Cat_1 Var_1_Cat_2 Var_1_Cat_3 Var_1_Cat_4 Var_1_Cat_5 \\\n",
"0 False False False True False \n",
"1 False False False True False \n",
"2 False False False False False \n",
"3 False False False False False \n",
"4 False False False False False \n",
"\n",
" Var_1_Cat_6 Var_1_Cat_7 Segmentation \n",
"0 False False 3 \n",
"1 False False 0 \n",
"2 True False 1 \n",
"3 True False 1 \n",
"4 True False 0 \n",
"\n",
"[5 rows x 29 columns]"
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "dataframe",
"variable_name": "df1"
}
},
"metadata": {},
"execution_count": 24
}
]
},
{
"cell_type": "code",
"source": [
"print(label_mapping)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "HPq0Lbrr_e8G",
"outputId": "14a7a43c-0531-427f-be55-28ead25d8865"
},
"execution_count": 25,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"{0: 0, 1: 1, 2: 2, 3: 3}\n"
]
}
]
},
{
"cell_type": "code",
"source": [
"# Separating the independent variables (X) from the target (y)\n",
"X = df1.drop(['Segmentation'], axis=1)\n",
"y = df1['Segmentation']\n",
"X.head(2)"
],
"metadata": {
"id": "LukvBGXgQEbA",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 161
},
"outputId": "fc4095d3-5d42-42a1-a756-9e27a7d8cfad"
},
"execution_count": 16,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
" Age Work_Experience Family_Size Gender_Female Gender_Male \\\n",
"0 22 1.0 4.0 False True \n",
"1 38 1.0 3.0 True False \n",
"\n",
" Ever_Married_No Ever_Married_Yes Graduated_No Graduated_Yes \\\n",
"0 True False True False \n",
"1 False True False True \n",
"\n",
" Profession_Artist ... Spending_Score_Average Spending_Score_High \\\n",
"0 False ... False False \n",
"1 False ... True False \n",
"\n",
" Spending_Score_Low Var_1_Cat_1 Var_1_Cat_2 Var_1_Cat_3 Var_1_Cat_4 \\\n",
"0 True False False False True \n",
"1 False False False False True \n",
"\n",
" Var_1_Cat_5 Var_1_Cat_6 Var_1_Cat_7 \n",
"0 False False False \n",
"1 False False False \n",
"\n",
"[2 rows x 28 columns]"
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "dataframe",
"variable_name": "X"
}
},
"metadata": {},
"execution_count": 16
}
]
},
{
"cell_type": "code",
"source": [
"# Import the train-test split utility\n",
"from sklearn.model_selection import train_test_split\n",
"\n",
"# Divide into train and test sets (stratified on the target)\n",
"trainX, testX, trainY, testY = train_test_split(X, y, train_size=0.8, random_state=101, stratify=y)\n",
"trainX.shape, trainY.shape, testX.shape, testY.shape"
],
"metadata": {
"id": "D4s7KWkVQItR",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "55fbb4d1-fadc-432b-b19f-a5d6594dc78b"
},
"execution_count": 29,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"((6454, 28), (6454,), (1614, 28), (1614,))"
]
},
"metadata": {},
"execution_count": 29
}
]
},
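{
"cell_type": "markdown",
"source": [
"*Aside (added, hedged):* because `stratify=y` was passed to the split, the class proportions of `Segmentation` should be (nearly) identical in the train and test sets. A quick sketch to verify, assuming the `trainY`/`testY` names from the split above:"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Sketch: compare normalized class frequencies across the stratified split.\n",
"print(trainY.value_counts(normalize=True).round(3))\n",
"print(testY.value_counts(normalize=True).round(3))"
],
"metadata": {},
"execution_count": null,
"outputs": []
},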
{
"cell_type": "code",
"source": [
"# Correlation matrix\n",
"# Select numeric columns (float64 and int64 types) from the dataset\n",
"numeric_cols = df1.drop(columns=['Segmentation']).select_dtypes(include=['float64', 'int64']).columns\n",
"# Extract only numeric columns into a new dataframe\n",
"df_numeric_only = df1[numeric_cols]\n",
"\n",
"plt.figure(figsize=(7,5))\n",
"sns.heatmap(df_numeric_only.corr(method='spearman').round(2), linewidths=0.5, annot=True, cmap=\"YlGnBu\")\n",
"plt.show()\n"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 451
},
"id": "kdoBZMfNRXoK",
"outputId": "5577d291-36ed-4a3b-8f12-665005158ab0"
},
"execution_count": 30,
"outputs": [
{
"output_type": "display_data",
"data": {
"text/plain": [
""
],
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAjcAAAGyCAYAAAAYveVYAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/bCgiHAAAACXBIWXMAAA9hAAAPYQGoP6dpAABUdUlEQVR4nO3deVxU1d8H8M8AMsAIAiprCIILoaCo4a5gKFq5ZKWZue/70mKUgrjhz9RMJUnFXFLTrMw9jcQVd3HFXUJRFEViEdnmPH/4ODoCIwwXxxk/7+d1X0/33HPP/d5hfvjlLPfKhBACRERERAbCSNcBEBEREUmJyQ0REREZFCY3REREZFCY3BAREZFBYXJDREREBoXJDRERERkUJjdERERkUJjcEBERkUFhckNEREQGhckNERERGRQmN0RERFRie/fuRceOHeHk5ASZTIaNGze+8JyYmBg0aNAAcrkcNWrUwPLly8s1RiY3REREVGJZWVmoV68eIiIiSlT/+vXrePfddxEQEIC4uDiMHTsWAwcOxF9//VVuMcr44kwiIiLShkwmwx9//IEuXboUW2fChAnYunUrzp49qyr7+OOPkZaWhh07dpRLXOy5ISIieo3l5OQgPT1dbcvJyZGs/djYWAQGBqqVBQUFITY2VrJrPM+k3FomIiKicmFerYdkbU3oXxthYWFqZaGhoZg8ebIk7ScnJ8Pe3l6tzN7eHunp6cjOzoa5ubkk13nWK5XcSPnDotdPduJapOVu13UYpKesTTsAuKTrMEiv1dJ1AFoJDg7G+PHj1crkcrmOopHGK5XcEBER0YvJZNLNKpHL5eWazDg4OODOnTtqZXfu3IGVlVW59NoATG6IiIj0jkyPpsw2bdoU27ZtUyvbtWsXmjZtWm7X1J9Ph4iIiHQuMzMTcXFxiIuLA/B4qXdcXBwSExMBPB7m6t27t6r+0KFDce3aNXz55Ze4cOECfvjhB6xfvx7jxo0rtxjZc0NERKRnpByWKq1jx44hICBAtf9kvk6fPn2wfPly3L59W5XoAED16tWxdetWjBs3Dt9//z3eeOMNLF26FEFBQeUW4yv1nBtOKKay4IRiKgtOKKaye3kTii2r95OsrYzrP0nW1quCw1JERERkUDgsRUREpGdkMpmuQ3ilMbkhIiLSOxx40YSfDhERERkU9twQERHpGV2ultIHTG6IiIj0DJMbzfjpEBERkUFhzw0REZGe0afXL+gCkxsiIiI9w2EpzfjpEBERkUFhzw0REZGeYc+NZkxuiIiI9AyTG8346RAREZFBYc8NERGRnpGB75bShMkNERGRnuGwlGb8dIiIiMigsOeGiIhIz7DnRjMmN0RERHqGyY1m/HSIiIjIoLDnhoiISO+wb0ITJjdERER6hsNSmvHTISIiIoPCnhsiIiI9w54bzZjcEBER6RkZB1404qdDREREBoU9N0RERHqGw1KaMbkhIiLSMzIZX5ypCVM/IiIiMijsuSEiItIzHJbSjMkNERGRnuFqKc346RAREZFBYc8NERGRnuGwlGZMboiIiPQMkxvN+OkQERGRQWHPDRERkZ7hhGLNmNwQERHpGw5LacRPh4iIiAwKe26IiIj0DCcUa8bkhoiISM/w3VKaMfUjIiIig8KeGyIiIj3D1VKaMbkhIiLSM5xzoxk/HSIiIjIo7LkhIiLSN5xQrBGTGyIiIn3DcReN+PEQERGRQdE6ucnNzcXFixeRn58vZTxERET0IjKZdJsBKnVy8/DhQwwYMAAWFhaoU6cOEhMTAQCjRo3CzJkzJQ+QiIiInsPkRqNSJzfBwcE4deoUYmJiYGZmpioPDAzEunXrJA2OiIiIqLRKndxs3LgRCxcuRIsWLdQe/1ynTh1cvXpV0uCIiIioCEYSblqIiIiAm5sbzMzM0LhxYxw5ckRj/Xnz5qF27dowNzeHi4sLxo0bh0ePHml38RIo9WqplJQU2NnZFSrPysriuy6IiI
heAqHDf2/XrVuH8ePHIzIyEo0bN8a8efMQFBSEixcvFpkfrFmzBl999RWWLVuGZs2a4dKlS+jbty9kMhnmzp1bLjGWOmdr1KgRtm7dqtp/ktAsXboUTZs2lS4yIiIieuXMnTsXgwYNQr9+/eDl5YXIyEhYWFhg2bJlRdY/ePAgmjdvjk8++QRubm5o164devTo8cLenrIodc/NjBkz0KFDB5w/fx75+fn4/vvvcf78eRw8eBB79uwpjxhfW839PDFu6Hto4O0OR3sbdBs4B5t3HtN1WPSKEEJgccR2/PnbIWRmZMOnfnV8OekjVHOtWuw5J49dxc/L/8GF8zdwLyUds+b1R+u3fYqtP3PKevzx60GM/bILevTyL4e7IF0RQmD+/NX49dedSE/PQoMGb2Ly5OFwc3Mq9pwff/wVO3cexLVrSTAzM4Wvryc+/7wv3N3fUNVJSXmAWbOW4eDBOGRlZaN6dWcMHdoNQUHNX8ZtvT4k7LjJyclBTk6OWplcLodcLi9UNzc3F8ePH0dwcLCqzMjICIGBgYiNjS2y/WbNmuHnn3/GkSNH4Ofnh2vXrmHbtm3o1auXdDfxnFL33LRo0QJxcXHIz8+Ht7c3du7cCTs7O8TGxqJhw4blEeNrS2Ehx5nziRg7sehsmF5vq5ZFY/2avZgw6SNErR4HM3NTjBkSiZycvGLPyc7OQc1aTvjimw9f2H5M9GmcPZ2AqnaVpAybXhFLlvyGVau2YPLk4Vi/fjbMzc0wYEAIcnJyiz3nyJGz6NnzXaxf/y1++mkq8vMLMGBACB4+fDp3YsKEubh+PQmLFk3C5s0L0bZtM4wdOwvnz3NOpqSMZJJt4eHhqFSpktoWHh5e5GXv3buHgoIC2Nvbq5Xb29sjOTm5yHM++eQTTJkyBS1atECFChXg4eEBf39/fP3115J/LE9o9YRiDw8PLFmyROpY6Dk7Y05hZ8wpXYdBryAhBH75eS/6DW6H1m28AQCTZ/REB/9J2PPPGbTr0KDI85q19EKzll4vbP/unTTMnvEb5v84FONHLJY0dtI9IQRWrtyEYcO6ITCwCQBg1qxxaNasF/7++xDefbdVkedFRYWp7c+cORZNm36Kc+eu4K236gIATp68gNDQYfDxqQUAGD68O1as+BPnzl2Bl5dHOd4VaSs4OBjjx49XKyuq10ZbMTExmDFjBn744Qc0btwYV65cwZgxYzB16lRMmjRJsus8q9TJTXp6epHlMpkMcrkcpqamZQ6KiDS7dfM+7t9Lh1+TWqqyipbmqOPtijOnEopNbkpCqVRi8ter8Wm/NnCv4ShFuPSKuXnzDlJSHqBZs/qqMktLBerVq4WTJy8Um9w8LyMjCwBQqZKlqszX1xPbt++Dv/9bsLJSYPv2/cjJyYWfn7ek9/Dak3BCcXFDUEWpUqUKjI2NcefOHbXyO3fuwMHBochzJk2ahF69emHgwIEAAG9vb2RlZWHw4MH45ptvYGQk/csSSt2itbU1bGxsCm3W1tYwNzeHq6srQkNDoVQqi20jJycH6enpatvz431EVLz79zMAALaVLdXKbStbIvVe0X+AlNTKZdEwNjZC954l+weO9E9KygMAQOXK1mrllStb4969ByVqQ6lUYsaMJWjQ4E3UquWqKp83bwLy8wvQuPEn8PbuipCQCCxc+DVcXYufy0NakEm4lYKpqSkaNmyI6OhoVZlSqUR0dHSxi4oePnxYKIExNjYG8LgXsTyUuudm+fLl+Oabb9C3b1/4+fkBAI4cOYIVK1Zg4sSJSElJwezZsyGXy4sdTwsPD0dYmHr3ZmhoqBbhE70edmw5hplT1qv250YMLpfrxJ+7gXU/78XK9Z/z0Q4GZNOmGISGRqj2f/wxpMxthoVF4vLlRKxZ8z+18u+/X4309CwsXz4NNjZW+PvvQxg7dhZWr56J2rXdynxd0r3x48ejT58+aNSoEfz8/DBv3jxkZWWhX79+AIDevXvD2dlZNW+nY8eOmD
t3Lnx9fVXDUpMmTULHjh1VSY7USp3crFixAnPmzEG3bt1UZR07doS3tzd+/PFHREdHo1q1apg+fXqxyU1x43v/W9a3tOEQvRZaBtRFHZ+nfx3n5T5+p1vq/QxUqfp0wm/q/QzU9HTW+jpxJ67iQWomOrd7+sdHQYES82f/iXU/78HGv/hHiD5q08YP9eo9HcLMzX086fz+/TTY2dmqyu/fT4Onp/sL25syJRIxMUfx88/hcHCooipPTLyNn3/egi1bFqJmzcffV0/P6jh27BxWr96KKVNGSHVLZKS7Pz66d++OlJQUhISEIDk5GfXr18eOHTtUk4wTExPVemomTpwImUyGiRMnIikpCVWrVkXHjh0xffr0coux1MnNwYMHERkZWajc19dXtQysRYsWqndOFaU043tEBCgUZlAonr7uRAiBylWscPTwZdTyfLwMNzPzEc6d+Rddu2u/5Padjm/Br0lttbIxQyPR4b1GeK+Ln9btkm5VrGiBihUtVPtCCFStaoPY2FN4883HyUxm5kOcOnUJPXq8U2w7QghMnfojdu2KxapV4XBxUZ9jkZ39eHpB4SEIo3Ibfnht6bhndeTIkRg5cmSRx2JiYtT2TUxMEBoa+lJHaEo958bFxQVRUVGFyqOiouDi4gIAuH//PmxsbMoe3WtOYSGHj5crfLwe/wXk5lIVPl6ucHGqrOPISNdkMhk+/rQVfvpxJ/buPosrl24h7OufUaVqJdXqKQAYMTACv67Zp9p/+DAHly7cxKULNwEAt5JScenCTSTffjzPopK1Ah41HdU2ExMj2FaxhGt19aWfpL9kMhl69+6ERYvWITr6MC5eTMCXX86FnZ2tavUUAPTp8w1+/nmLaj8sbBE2bYrBnDmfQ6EwR0rKA6SkPMCjR4+TGnf3N+Dq6oiQkAicPn0JiYm3sWzZHzhwIE6tXaLyVuqem9mzZ+Ojjz7C9u3b8dZbbwEAjh07hvj4ePz2228AgKNHj6J79+7SRvoaauDjjp3rn46NzwrtDQBY9eseDP6scO8ZvV569X8b2dm5CA9bh8yMbNTzdcf3kUMgl1dQ1Um6cQ9paZmq/fhziRje/+nci3nfbgQAvNvpLYRM7/nSYifdGzToA2RnP0JIyEKkp2ehYUMvLF0aBrn86YrXGzeS8eDB0wnqa9duBwD06qU+5SA8fAy6dg1EhQomWLx4MubMWY6hQ6fi4cNsVKvmiJkzx6J160Yv58ZeF5wSp5FMaNFXmJCQgMjISFy6dAkAULt2bQwZMgSZmZmoW7eu1sGYV+uh9blE2YlrkZa7XddhkJ6yNu0A4JKuwyC9VuvFVSRSs710D3e9vKO/ZG29KrR6iJ+bmxtmzpwJ4PFzb9auXYvu3bvj2LFjKCgokDRAIiIiotLQ+sk5e/fuRZ8+feDk5IQ5c+YgICAAhw4dkjI2IiIiKoqOnnOjL0rVc5OcnIzly5cjKioK6enp6NatG3JycrBx40Z4eb34ke5ERERUdoLPodKoxD03HTt2RO3atXH69GnMmzcPt27dwoIFC8ozNiIiIqJSK3HPzfbt2zF69GgMGzYMNWvWLM+YiIiISBMdPsRPH5S452b//v3IyMhAw4YN0bhxYyxcuBD37t0rz9iIiIioKJxzo1GJk5smTZpgyZIluH37NoYMGYJffvkFTk5OUCqV2LVrFzIyMsozTiIiIqISKfVqKYVCgf79+2P//v04c+YMPvvsM8ycORN2dnbo1KlTecRIREREz5LJpNsMkNZLwYHHD++bNWsWbt68ibVr10oVExEREWliJJNuM0BlSm6eMDY2RpcuXbBp0yYpmiMiIiLSmlZPKCYiIiIdMswOF8kwuSEiItI3BjpXRiqSDEsRERERvSrYc0NERKRv2HOjEZMbIiIifcNxF4348RAREZFBYc8NERGRvuGwlEZMboiIiPQNcxuNOCxFREREBoU9N0RERHpGGOhrE6TC5IaIiEjfcM6NRhyWIiIiIoPCnh
siIiJ9w44bjZjcEBER6RvOudGIw1JERERkUNhzQ0REpG84oVgjJjdERET6hrmNRhyWIiIiIoPCnhsiIiJ9wwnFGjG5ISIi0jdMbjTisBQREREZFPbcEBER6RnBjhuNmNwQERHpGw5LacRhKSIiIjIo7LkhIiLSN3yIn0ZMboiIiPQNh6U04rAUERERGRT23BAREekbdk1oxOSGiIhI33DOjUbM/YiIiMigsOeGiIhI33BCsUZMboiIiPSM4LCURhyWIiIiIoPCnhsiIiJ9w64JjZjcEBER6RvOudGIuR8REREZFCY3RERE+kYmk27TQkREBNzc3GBmZobGjRvjyJEjGuunpaVhxIgRcHR0hFwuR61atbBt2zatrl0SHJYiIiLSNzocllq3bh3Gjx+PyMhING7cGPPmzUNQUBAuXrwIOzu7QvVzc3PRtm1b2NnZYcOGDXB2dsa///4La2vrcouRyQ0RERGV2Ny5czFo0CD069cPABAZGYmtW7di2bJl+OqrrwrVX7ZsGVJTU3Hw4EFUqFABAODm5lauMXJYioiISN/IpNtycnKQnp6utuXk5BR52dzcXBw/fhyBgYGqMiMjIwQGBiI2NrbIczZt2oSmTZtixIgRsLe3R926dTFjxgwUFBRI8EEUjckNERGRnhFGMsm28PBwVKpUSW0LDw8v8rr37t1DQUEB7O3t1crt7e2RnJxc5DnXrl3Dhg0bUFBQgG3btmHSpEmYM2cOpk2bJvnn8gSHpYiIiF5jwcHBGD9+vFqZXC6XrH2lUgk7OzssXrwYxsbGaNiwIZKSkvDtt98iNDRUsus8i8kNERGRvpFwQrFcLi9xMlOlShUYGxvjzp07auV37tyBg4NDkec4OjqiQoUKMDY2VpW9+eabSE5ORm5uLkxNTbUPvhgcliIiItI3OloKbmpqioYNGyI6OlpVplQqER0djaZNmxZ5TvPmzXHlyhUolUpV2aVLl+Do6FguiQ3A5IaIiIhKYfz48ViyZAlWrFiB+Ph4DBs2DFlZWarVU71790ZwcLCq/rBhw5CamooxY8bg0qVL2Lp1K2bMmIERI0aUW4wcliIiItI3Ouya6N69O1JSUhASEoLk5GTUr18fO3bsUE0yTkxMhJHR0wBdXFzw119/Ydy4cfDx8YGzszPGjBmDCRMmlFuMMiGEKLfWS8m8Wg9dh0B6LDtxLdJyt+s6DNJT1qYdAFzSdRik12q9tCu5he6QrK2EsPaStfWq4LAUERERGZRXalgqO3GtrkMgPff4r28ibb28v7yJyoRvBdfolUpuOKRAZWFt2oFDm6S17MS1SMjYrOswSI+5WXZ8eRdjcqMRh6WIiIjIoLxSPTdERET0YqKUz6d53TC5ISIi0jccd9GIHw8REREZFPbcEBER6RsOS2lU5uTm0aNHMDMzkyIWIiIiKgmultJIq2EppVKJqVOnwtnZGRUrVsS1a9cAAJMmTUJUVJSkARIRERGVhlbJzbRp07B8+XLMmjVL7Y2edevWxdKlSyULjoiIiIpgJJNuM0BaJTcrV67E4sWL0bNnTxgbG6vK69WrhwsXLkgWHBERERVBJuFmgLRKbpKSklCjRo1C5UqlEnl5eWUOioiIiEhbWiU3Xl5e2LdvX6HyDRs2wNfXt8xBERERUfGEkUyyzRBptVoqJCQEffr0QVJSEpRKJX7//XdcvHgRK1euxJYtW6SOkYiIiKjEtOq56dy5MzZv3oy///4bCoUCISEhiI+Px+bNm9G2bVupYyQiIqJnyWTSbQZI6+fctGzZErt27ZIyFiIiIioJAx1OkopWPTdHjx7F4cOHC5UfPnwYx44dK3NQRERERNrSKrkZMWIEbty4Uag8KSkJI0aMKHNQREREpAGXgmuk1bDU+fPn0aBBg0Llvr6+OH/+fJmDIiIiouIZ8bXXGmn18cjlcty5c6dQ+e3bt2FiwndxEhERke5oldy0a9cOwcHB+O+//1RlaWlp+Prrr7laioiIqJ
xxsZRmWnWzzJ49G61atYKrq6vqoX1xcXGwt7fHqlWrJA2QiIiI1BlqUiIVrZIbZ2dnnD59GqtXr8apU6dgbm6Ofv36oUePHqhQoYLUMRIRERGVmNYTZBQKBQYPHixlLERERFQCMnbdaKR1cnP58mXs3r0bd+/ehVKpVDsWEhJS5sCIiIioaMxtNNMquVmyZAmGDRuGKlWqwMHBQS2DlMlkTG6IiIhIZ7RKbqZNm4bp06djwoQJUsdDREREL8CeG820Sm4ePHiAjz76SOpYiIiIqARkfIifRlp9PB999BF27twpdSxEREREZaZVz02NGjUwadIkHDp0CN7e3oWWf48ePVqS4IiIiKgwDktpplVys3jxYlSsWBF79uzBnj171I7JZDImN0REROXIiMmNRlolN9evX5c6DiIiIiJJlGlKUm5uLi5evIj8/Hyp4iEiIqIX4LulNNMquXn48CEGDBgACwsL1KlTB4mJiQCAUaNGYebMmZIGSEREROqY3GimVXITHByMU6dOISYmBmZmZqrywMBArFu3TrLgiIiIiEpLqzk3GzduxLp169CkSRO1pxPXqVMHV69elSw4IiIiKozvltJMq+QmJSUFdnZ2hcqzsrL4gRMREZUzPsRPM60+nkaNGmHr1q2q/ScJzdKlS9G0aVNpIiMiIiLSglY9NzNmzECHDh1w/vx55Ofn4/vvv8f58+dx8ODBQs+9ISIiImlxkEQzrXpuWrRogbi4OOTn58Pb2xs7d+6EnZ0dYmNj0bBhQ6ljJCIiomdwtZRmWvXcAICHhweWLFkiZSxEREREZVbi5CY9PR1WVlaq/9bkST0iIiKSnqH2uEilxMmNjY0Nbt++DTs7O1hbWxe5KkoIAZlMhoKCAkmDJCIioqf4binNSpzc/PPPP7C1tQUA7N69u9wCIiIiIiqLEic3rVu3BgDk5+djz5496N+/P954441yC4yIiIiKxmEpzUq9WsrExATffvstX5ZJRESkI1wtpZlWS8HbtGnD59kQERHRK0mrpeAdOnTAV199hTNnzqBhw4ZQKBRqxzt16iRJcERERFSYjDOKNdIquRk+fDgAYO7cuYWOcbUUERFR+dL1cFJERAS+/fZbJCcno169eliwYAH8/PxeeN4vv/yCHj16oHPnzti4cWO5xafVsJRSqSx2Y2JDRERkuNatW4fx48cjNDQUJ06cQL169RAUFIS7d+9qPC8hIQGff/45WrZsWe4xlvm9oo8ePZIiDiIiIiohXU4onjt3LgYNGoR+/frBy8sLkZGRsLCwwLJly4o9p6CgAD179kRYWBjc3d3LcOclo1VyU1BQgKlTp8LZ2RkVK1bEtWvXAACTJk1CVFSUpAESERGROl0lN7m5uTh+/DgCAwNVZUZGRggMDERsbGyx502ZMgV2dnYYMGCAtrdcKlolN9OnT8fy5csxa9YsmJqaqsrr1q2LpUuXShYcERERla+cnBykp6erbTk5OUXWvXfvHgoKCmBvb69Wbm9vj+Tk5CLP2b9/P6Kiol7q+yi1Sm5WrlyJxYsXo2fPnjA2NlaV16tXDxcuXJAsOCIiIirMSCbdFh4ejkqVKqlt4eHhksSZkZGBXr16YcmSJahSpYokbZaEVqulkpKSUKNGjULlSqUSeXl5ZQ6KiIiIiiflaqng4GCMHz9erUwulxdZt0qVKjA2NsadO3fUyu/cuQMHB4dC9a9evYqEhAR07NhRVaZUKgE8fijwxYsX4eHhUdZbKESrnhsvLy/s27evUPmGDRvg6+tb5qCIiIjo5ZDL5bCyslLbiktuTE1N0bBhQ0RHR6vKlEoloqOj0bRp00L1PT09cebMGcTFxam2Tp06ISAgAHFxcXBxcSmXe9Kq5yYkJAR9+vRBUlISlEolfv/9d1y8eBErV67Eli1bpI6RiIiIniEr81pn7Y0fPx59+vRBo0aN4Ofnh3nz5iErKwv9+vUDAPTu3RvOzs4IDw+HmZkZ6tatq3a+tbU1AB
Qql5JWyU3nzp2xefNmTJkyBQqFAiEhIWjQoAE2b96Mtm3bSh0jERERPUOXD/Hr3r07UlJSEBISguTkZNSvXx87duxQTTJOTEyEkZEOsy8AMiGE0GkEz0jL3a7rEEiPWZt2gHm1HroOg/RUduJaJGRs1nUYpMfcLDu+uJJEWm7aL1lb+zq1kKytV4VWPTdPHDt2DPHx8QAez8Np2LChJEG9DoQQWByxHX/+dgiZGdnwqV8dX076CNVcqxZ7zsljV/Hz8n9w4fwN3EtJx6x5/dH6bZ9i68+csh5//HoQY7/sgh69/MvhLuhV19zPE+OGvocG3u5wtLdBt4FzsHnnMV2HRa8AIQRW/vgXdvxxGJmZ2fCqVx2jv+oK52rF/w4CgE3rD2DDqhik3s+Ae01HDP/ifXjWraZW5/zpBCz/YTsunE2EsbER3Gs5YcaCwZCbVSjHO3q9yHT9/oVXnFb9Rjdv3kTLli3h5+eHMWPGYMyYMXjrrbfQokUL3Lx5U+oYDdKqZdFYv2YvJkz6CFGrx8HM3BRjhkQiJ6f41WbZ2TmoWcsJX3zz4Qvbj4k+jbOnE1DVrpKUYZOeUVjIceZ8IsZOLP7JofR6Wr9iN/78ZT9GBX+A75ePhpmZKb4etQS5Gn4HxeyMw+LvNqHnoLaI+Hks3Gs54ZtRS5CWmqGqc/50Ar4ZtRQNm9TG/BVjMH/FGHTq1pwvepSYLp9QrA+0Sm4GDhyIvLw8xMfHIzU1FampqYiPj4dSqcTAgQOljtHgCCHwy8970W9wO7Ru442atZ0weUZP3Ev5D3v+OVPsec1aemHo6Hfhr6G3BgDu3knD7Bm/YcrMXjAx0e24J+nWzphTCJu9Hpv+Ym8NPSWEwMa1+9BjQCCa+deFe00nfDnlY9xPScfBmLPFnvf76j1o36Uxgjr5wdXdAaODP4DcrAL+2nRUVefHuZvQ5eMW6N63Ddw8HODiZofWbevD1LRMAwVEpaLVv3x79uzBokWLULt2bVVZ7dq1sWDBAuzdu1ey4AzVrZv3cf9eOvya1FKVVbQ0Rx1vV5w5lVCmtpVKJSZ/vRqf9msD9xqOZYyUiAxRclIqUu9noIFfTVWZoqI5POtWQ/yZf4s8Jy8vH5cvJKFB46e/t4yMjODrVxPnTz8+Jy01AxfOJsLapiLG9l+A7u0m4/PBP+Bs3PXyvaHXEHtuNNMquXFxcSnyYX0FBQVwcnIqc1CG7v79x124tpUt1cptK1si9V56mdpeuSwaxsZG6N6zVZnaISLDlfr/v4Osn/sdZG1bUXXseelpWVAWKGFtW1Gt3MbWEg/uP/69dTspFQCwaslOdOjSGNPnD0KN2s74algkkhJTpL6N1xqTG8206if89ttvMWrUKERERKBRo0YAHk8uHjNmDGbPnv3C83Nycgq9t0IulwMG+iHv2HIMM6esV+3PjRhcLteJP3cD637ei5XrP+dkMyJS+Wf7CXw/Y4Nqf+q88nl5oVL5ePHtO12bIKiTHwCghqcz4o5ewV+bjqL/yHfK5bpEz9Mquenbty8ePnyIxo0bw8TkcRP5+fkwMTFB//790b9/f1Xd1NTUQueHh4cjLCxMrSw0NBRjv26sTTivvJYBdVHHx1W1n5ebD+DxX09Vqj6d8Jt6PwM1PZ21vk7ciat4kJqJzu2efrYFBUrMn/0n1v28Bxv/CtW6bSLSX01aeaF23aeP13/yOyjtfgYqV7FSlaelZsKjVtG971bWChgZGyEtNVOt/EFqBmwqP26jcpXHPUGu1dVfquhS3Q53kx+U/UZIhfOzNdMquZk3b16ZLlrceyyy8U+Z2n1VKRRmUCjMVPtCCFSuYoWjhy+jlucbAIDMzEc4d+ZfdO3eXOvrvNPxLfg1qa1WNmZoJDq81wjvdfHTul0i0m8WCjNYPPc7yLayJU4evQyP2o//oMrKfIQLZxPx3geFH6EPABUqmKCmpzNOHrmMZv6PnyyrVCoRd/QKOnV7/HvL3skWlata4e
a/6kNQSf+moFFzz/K4tdcWkxvNtEpu+vTpU6aLyuXyIt9bkZ1bpmb1hkwmw8eftsJPP+6ES7WqcHK2xY8Lt6FK1Upo3cZbVW/EwAj4t/HBR5+0BAA8fJiDm8+MW99KSsWlCzdhVUkBB0cbVLJWoJK1Qu1aJiZGsK1iWegvKXo9KCzk8HB7+jI7N5eq8PFyxYO0TNy4dV+HkZEuyWQydOnREmujouHsUhUOzrZYsWgHKle1UiUuADBhWCSa+ddF5+6PH/LWtWdrzJ78C2p5vYHadarhjzX78Cg7F+06vqVq98Ne/lj1406413SEe21n/L3lGG78excTZ/XWyb3S60mr5Gb58uXo27dvofL8/HxMmjRJslelG7Je/d9GdnYuwsPWITMjG/V83fF95BDI5U8fcpV04x7S0p52AcefS8Tw/hGq/XnfbgQAvNvpLYRM7/nSYif90cDHHTvXh6j2Z4U+/gdm1a97MPizSF2FRa+Abn0C8OhRLr6fsQGZGdmoU786ps8fBNNnfgfdvnkf6WlZqn3/dvXx34NMrIz8Cw/uZ8C9lhOmLxgIm2cmJnf9pBXycvMR+d0mZPz3EO61nBAeMQROb1R5qfdn6Ixkr8zLBV5JWr1+wcrKCkFBQVi8eDFsbGwAABcvXsQnn3yC+/fvIyEhQatg+PoFKgu+foHKgq9foLJ6ma9f6LBTutcvbG9neK9f0Gop+MmTJ3Hz5k14e3tj165diIiIQIMGDeDp6YlTp05JHSMRERFRiWk1LOXh4YEDBw5g7NixaN++PYyNjbFixQr06MG/momIiMobnz2vmdafz9atW/HLL7+gadOmsLa2RlRUFG7duiVlbERERFQEI5mQbDNEWiU3Q4YMwUcffYQJEyZg3759OH36NExNTeHt7Y3169e/uAEiIiKicqLVsNSBAwdw+PBh1KtXDwDg4OCAbdu2ISIiAv3790e3bt0kDZKIiIie4nNuNNMquTl+/HiRz6kZMWIEAgMDyxwUERERFY9zbjQr1edz9+5dACgysQEeP+fmv//+K3tURERERFoqVXLj6OioSnAAwNvbGzdu3FDt379/H02bFv3obiIiIpKGkUy6zRCValjq+ef9JSQkIC8vT2MdIiIikpbMQFc5SUXyYTuZzEDTQCIiItILWk0oJiIiIt0x1OEkqZQquZHJZMjIyICZmRmEEJDJZMjMzER6ejoAqP4/ERERlR+ultKs1HNuatWqpbbv6+urts9hKSIiItKlUiU3u3fvLq84iIiIqIQM9bUJUilVctO6detSNT5z5kwMHToU1tbWpTqPiIiIisc5N5qV67DdjBkzkJqaWp6XICIiIlJTrqul+MwbIiIi6XFCsWZcCk5ERKRnOCylGZM/IiIiMijsuSEiItIzXC2lGZMbIiIiPcNhKc0kH5bKzs5W/XfLli1hbm4u9SWIiIiIiqVVcjN69Ogiy7OysvDOO++o9rdt2wZHR0ftIiMiIqIiGUm4GSKthqW2bt0KGxsbhIWFqcqysrLQvn17yQIjIiKionHOjWZaJTc7d+5Ey5YtYWNjg7FjxyIjIwNBQUEwMTHB9u3bpY6RiIiIqMS0Sm48PDywY8cOBAQEwMjICGvXroVcLsfWrVuhUCikjpGIiIiewQnFmmm9WsrHxwdbtmxB27Zt0bhxY2zZsoWTh4mIiF4CJjealTi58fX1hUxW+NOUy+W4desWmjdvrio7ceKENNERERERlVKJk5suXbqUYxhERERUUoa6ykkqJU5uQkNDAQAFBQU4cOAAfHx8YG1tXV5xERERUTG4WkqzUid/xsbGaNeuHR48eFAe8RARERGViVY9W3Xr1sW1a9ekjoWIiIhKwEgm3WaItEpupk2bhs8//xxbtmzB7du3kZ6errYRERFR+eETijXTain4k1csdOrUSW0FlRACMpkMBQUF0kRHREREVEpaJTe7d++WOg4iIiIqIUMdTpKKVslN69atpY6DiIiISkjG1VIaaf
2E4rS0NERFRSE+Ph4AUKdOHfTv3x+VKlWSLDgiIiKi0tJqLtGxY8fg4eGB7777DqmpqUhNTcXcuXPh4eHBpxMTERGVM66W0kyrnptx48ahU6dOWLJkCUxMHjeRn5+PgQMHYuzYsdi7d6+kQRIREdFThrrKSSpa99xMmDBBldgAgImJCb788kscO3ZMsuCIiIjo1RMREQE3NzeYmZmhcePGOHLkSLF1lyxZgpYtW8LGxgY2NjYIDAzUWF8KWiU3VlZWSExMLFR+48YNWFpaljkoIiIiKp6RTEi2lda6deswfvx4hIaG4sSJE6hXrx6CgoJw9+7dIuvHxMSgR48e2L17N2JjY+Hi4oJ27dohKSmprB9DsbRKbrp3744BAwZg3bp1uHHjBm7cuIFffvkFAwcORI8ePaSOkYiIiJ6hyzk3c+fOxaBBg9CvXz94eXkhMjISFhYWWLZsWZH1V69ejeHDh6N+/frw9PTE0qVLoVQqER0dXcZPoXilmnNz/fp1VK9eHbNnz4ZMJkPv3r2Rn58PIQRMTU0xbNgwzJw5s7xiJSIiIonl5OQgJydHrUwul0Mulxeqm5ubi+PHjyM4OFhVZmRkhMDAQMTGxpboeg8fPkReXh5sbW3LFrgGpUpuPDw84OrqioCAAAQEBODKlStIS0tTHbOwsCiPGImIiOgZUq5yCg8PR1hYmFpZaGgoJk+eXKjuvXv3UFBQAHt7e7Vye3t7XLhwoUTXmzBhApycnBAYGKh1zC9SquTmn3/+QUxMDGJiYrB27Vrk5ubC3d0dbdq0QZs2beDv71/ohomIiEhaxhK29UVwMMaPH69WVlSvjRRmzpyJX375BTExMTAzMyuXawClTG78/f3h7+8PAHj06BEOHjyoSnZWrFiBvLw8eHp64ty5c+URKxEREUmsuCGoolSpUgXGxsa4c+eOWvmdO3fg4OCg8dzZs2dj5syZ+Pvvv+Hj46N1vCWh9VJ5MzMztGnTBhMnTkRYWBhGjx6NihUrlrhbioiIiLSjq9VSpqamaNiwodpk4CeTg5s2bVrsebNmzcLUqVOxY8cONGrUSOv7LqlSP8QvNzcXhw4dwu7duxETE4PDhw/DxcUFrVq1wsKFC/neKSIionKmyycLjx8/Hn369EGjRo3g5+eHefPmISsrC/369QMA9O7dG87OzggPDwcA/O9//0NISAjWrFkDNzc3JCcnAwAqVqyIihUrlkuMpUpu2rRpg8OHD6N69epo3bo1hgwZgjVr1sDR0bFcgiMiIqJXS/fu3ZGSkoKQkBAkJyejfv362LFjh2rObWJiIoyMng4MLVq0CLm5ufjwww/V2ilu0rIUSpXc7Nu3D46OjqrJw61bt0blypXLJTAiIiIqmq7fCTVy5EiMHDmyyGMxMTFq+wkJCeUf0HNKNecmLS0NixcvhoWFBf73v//ByckJ3t7eGDlyJDZs2ICUlJTyipOIiIj+n7FMus0QlarnRqFQoH379mjfvj0AICMjA/v378fu3bsxa9Ys9OzZEzVr1sTZs2fLJVgiIiKiF9HqreBPKBQK2NrawtbWFjY2NjAxMUF8fLxUsREREVERdD0s9aorVXKjVCpx7NgxxMTEYPfu3Thw4ACysrLg7OyMgIAAREREICAgoLxiJSIiIkCrF16+TkqV3FhbWyMrKwsODg4ICAjAd999B39/f3h4eJRXfERERESlUqrk5ttvv0VAQABq1apVXvEQERHRC3BYSrNSJTdDhgwprziIiIiohKR8t5Qh0vr1C0RERESvojKtliIiIqKXj8NSmr1SyY21aQddh0B6Ljtxra5DID3mZtlR1yEQlQhXS2n2SiU3wCVdB0B6rRYSMjbrOgjSU26WHWFerYeuwyA9xj+uXh2vWHJDREREL2Kor02QCpMbIiIiPcM5N5pxtRQREREZFPbcEBER6Rn23GjG5IaIiEjPMLnRjMNSREREZFDYc0NERKRnjPmcG42Y3BAREekZDrtoxs+HiI
iIDAp7boiIiPQMJxRrxuSGiIhIzzC50YzDUkRERGRQ2HNDRESkZ7haSjMmN0RERHqGw1KacViKiIiIDAp7boiIiPQMe240Y3JDRESkZ5jcaMZhKSIiIjIo7LkhIiLSM8bsudGIyQ0REZGeMeJScI04LEVEREQGhT03REREeoY9E5oxuSEiItIzXC2lGZM/IiIiMijsuSEiItIzXC2lGZMbIiIiPcPVUpqVaVjq6tWrmDhxInr06IG7d+8CALZv345z585JEhwRERFRaWmd3OzZswfe3t44fPgwfv/9d2RmZgIATp06hdDQUMkCJCIiInVGMuk2Q6R1cvPVV19h2rRp2LVrF0xNTVXlbdq0waFDhyQJjoiIiApjcqOZ1snNmTNn8P777xcqt7Ozw71798oUFBEREZG2tE5urK2tcfv27ULlJ0+ehLOzc5mCIiIiouIZSbgZIq3v6+OPP8aECROQnJwMmUwGpVKJAwcO4PPPP0fv3r2ljJGIiIieIZNJtxkirZObGTNmwNPTEy4uLsjMzISXlxdatWqFZs2aYeLEiVLGSERERFRiWj/nxtTUFEuWLEFISAjOnDmDzMxM+Pr6ombNmlLGR0RERM8x0A4XyWid3Ozdu1fVc+Pi4qIqz8vLQ2xsLFq1aiVJgERERKTOUIeTpKL1sJS/vz/q1atXaNl3amoqAgICyhwYERERkTbKNFH6448/xttvv43ly5erlQvBx0ITERGVF66W0kzrYSmZTIbg4GC0bNkSvXv3xunTpzFnzhzVMSIiIiofMr5bSiOtk7YnvTNdu3bFvn37sGHDBnTo0AFpaWlSxUZERESvoIiICLi5ucHMzAyNGzfGkSNHNNb/9ddf4enpCTMzM3h7e2Pbtm3lGp8kPVK+vr44cuQI0tLS8Pbbb0vRJBERERVDJuFWWuvWrcP48eMRGhqKEydOoF69eggKClK9QPt5Bw8eRI8ePTBgwACcPHkSXbp0QZcuXXD27Fktrl4yWic3ffr0gbm5uWrfwcEBe/bswdtvv41q1apJEhwREREVpsuH+M2dOxeDBg1Cv3794OXlhcjISFhYWGDZsmVF1v/+++/Rvn17fPHFF3jzzTcxdepUNGjQAAsXLizjp1A8rZObn376CZaWlmplcrkcK1aswPXr18scGBEREZW/nJwcpKenq205OTlF1s3NzcXx48cRGBioKjMyMkJgYCBiY2OLPCc2NlatPgAEBQUVW18KpZpQfPr0adStWxdGRkY4ffq0xro+Pj5lCoyIiIiKJuWynfDwcISFhamVhYaGYvLkyYXq3rt3DwUFBbC3t1crt7e3x4ULF4psPzk5ucj6ycnJZQtcg1IlN/Xr10dycjLs7OxQv359yGQytWXfT/ZlMhkKCgokD5aIiIgAIwmzm+DgYIwfP16tTC6XS3cBHShVcnP9+nVUrVpV9d9ERESk3+RyeYmTmSpVqsDY2Bh37txRK79z5w4cHByKPMfBwaFU9aVQqjk3rq6uqmfYuLq6atyIiIiofOhqtZSpqSkaNmyI6OhoVZlSqUR0dDSaNm1a5DlNmzZVqw8Au3btKra+FEo9ofjSpUuF1rNHR0cjICAAfn5+mDFjhmTBERERUWG6XC01fvx4LFmyBCtWrEB8fDyGDRuGrKws9OvXDwDQu3dvBAcHq+qPGTMGO3bswJw5c3DhwgVMnjwZx44dw8iRI6X6OAop9ROKJ0yYAG9vb/j5+QF4PDzVsWNHtGzZEj4+PggPD4eFhQXGjh0rdaxERESkY927d0dKSgpCQkKQnJyM+vXrY8eOHapJw4mJiTAyetp30qxZM6xZswYTJ07E119/jZo1a2Ljxo2oW7duucUoE6V8EZSLiwvWr1+v6k6aNm0aNmzYgLi4OABAVFQUFixYoNovnUtanEP0RC0kZGzWdRCkp9wsO8K8Wg9dh0F6LDtx7Uu7VnzaFsnaetP6PcnaelWUeljq3r17eOONN1T7u3fvRseOHVX7/v
7+SEhIkCQ4IiIiKkyXTyjWB6VObmxtbXH79m0AjycRHTt2DE2aNFEdz83N5VvBiYiISGdKndz4+/tj6tSpuHHjBubNmwelUgl/f3/V8fPnz8PNzU3CEImIiOhZRjLpNkNU6gnF06dPR9u2beHq6gpjY2PMnz8fCoVCdXzVqlVo06aNpEESERHRUwaak0im1MmNm5sb4uPjce7cOVStWhVOTk5qx8PCwtTm5BARERG9TKVObgDAxMQE9erVK/LY8+VWVlaIi4uDu7u7NpciIiKi58hknNuqiVbJTWlwcjEREZG0OCylWbknN1Q0IQTmz1+NX3/difT0LDRo8CYmTx4ONzenYs/58cdfsXPnQVy7lgQzM1P4+nri88/7wt396TBgSsoDzJq1DAcPxiErKxvVqztj6NBuCApq/jJui14iIQRW/vgXdvxxGJmZ2fCqVx2jv+oK52pVNZ63af0BbFgVg9T7GXCv6YjhX7wPz7rV1OqcP52A5T9sx4WziTA2NoJ7LSfMWDAYcrMK5XhH9Cpq7ueJcUPfQwNvdzja26DbwDnYvPOYrsMi0qjUq6VIGkuW/IZVq7Zg8uThWL9+NszNzTBgQAhycnKLPefIkbPo2fNdrF//LX76aSry8wswYEAIHj58pKozYcJcXL+ehEWLJmHz5oVo27YZxo6dhfPnr76M26KXaP2K3fjzl/0YFfwBvl8+GmZmpvh61BLk5uQVe07Mzjgs/m4Teg5qi4ifx8K9lhO+GbUEaakZqjrnTyfgm1FL0bBJbcxfMQbzV4xBp27NITPUZRWkkcJCjjPnEzF24jJdh0LP0OXrF/QBkxsdEEJg5cpNGDasGwIDm8DTszpmzRqHu3dT8fffh4o9LyoqDF27BqJmTVd4elbHzJljcetWCs6du6Kqc/LkBXz66Xvw8akFFxcHDB/eHVZWCrU6pP+EENi4dh96DAhEM/+6cK/phC+nfIz7Kek4GHO22PN+X70H7bs0RlAnP7i6O2B08AeQm1XAX5uOqur8OHcTunzcAt37toGbhwNc3OzQum19mJqyo/d1tDPmFMJmr8emv9hb8yoxknAzROV+XzJDTQvL4ObNO0hJeYBmzeqryiwtFahXrxZOnrxQ4nYyMrIAAJUqWarKfH09sX37PqSlZUCpVGLr1r3IycmFn5+3ZPGT7iUnpSL1fgYa+NVUlSkqmsOzbjXEn/m3yHPy8vJx+UISGjSupSozMjKCr19NnD/9+Jy01AxcOJsIa5uKGNt/Abq3m4zPB/+As3HXy/eGiIgkVO7JDScUF5aS8gAAULmytVp55crWuHfvQYnaUCqVmDFjCRo0eBO1armqyufNm4D8/AI0bvwJvL27IiQkAgsXfg1X1+Ln8pD+Sb3/eBjJurKlWrm1bUXVseelp2VBWaCEtW1FtXIbW0s8uJ8OALidlAoAWLVkJzp0aYzp8wehRm1nfDUsEkmJKVLfBhFpicNSmmmd3OzevbtE9bZv3w5nZ2e1spycHKSnp6ttOTk52obyytu0KQa+vh+ptvz8/DK3GRYWicuXE/Hdd1+qlX///Wqkp2dh+fJp+O2379CvXxeMHTsLFy8mlPmapDv/bD+Bzi2/Vm0F+QXlch2l8vEfI+90bYKgTn6o4emMoZ91xhuudmpDV0SkW3y3lGZaD6K3b98eb7zxBvr164c+ffrAxcWlyHotWrQoVBYeHo6wsDC1stDQUEye/Im24bzS2rTxQ716T4cCcnMfT/i8fz8Ndna2qvL799Pg6fni5wFNmRKJmJij+PnncDg4VFGVJybexs8/b8GWLQtRs+bj3hxPz+o4duwcVq/eiilTRkh1S/SSNWnlhdp1x6v283IfJ8hp9zNQuYqVqjwtNRMetYrupbOyVsDI2AhpqZlq5Q9SM2BT+XEblas87glyrW6vVseluh3uJpesV5GISNe07rlJSkrCyJEjsWHDBri7uyMoKAjr169Hbm7xq32eCA4Oxn///ae2BQcHax
vKK69iRQu4ujqptho1qqFqVRvExp5S1cnMfIhTpy7B19ez2HaEEJgyJRK7dsVixYrpcHFxUDuenf2498vISP3HamxsxOFBPWehMIOzSxXV5upuD9vKljh59LKqTlbmI1w4m4g3vV2LbKNCBRPU9HTGySNPz1EqlYg7egVePo/PsXeyReWqVrj5r/oQVNK/KbBztCmHOyMibXBYSjOtk5sqVapg3LhxiIuLw+HDh1GrVi0MHz4cTk5OGD16NE6dOlXsuXK5HFZWVmqbXC7XNhS9I5PJ0Lt3JyxatA7R0Ydx8WICvvxyLuzsbBEY+PQN6336fIOff96i2g8LW4RNm2IwZ87nUCjMkZLyACkpD/Do0eOkxt39Dbi6OiIkJAKnT19CYuJtLFv2Bw4ciFNrl/SfTCZDlx4tsTYqGrF7zuH6ldv4NnQtKle1QjP/uqp6E4ZF4s91+1X7XXu2xvaNh7Fry1EkXr+DBeG/41F2Ltp1fEvV7oe9/LHxl/3Y9/cpJN24hxWLduDGv3fRvrPfS79P0j2FhRw+Xq7w8XqcALu5VIWPlytcnCrrOLLXG4elNJNkbWeDBg3g4OCAypUrY+bMmVi2bBl++OEHNG3aFJGRkahTp44UlzEogwZ9gOzsRwgJWYj09Cw0bOiFpUvDIJebqurcuJGMBw/SVftr124HAPTq9bVaW+HhY9C1ayAqVDDB4sWTMWfOcgwdOhUPH2ajWjVHzJw5Fq1bN3o5N0YvTbc+AXj0KBffz9iAzIxs1KlfHdPnD4Kp/OmD9m7fvI/0tCzVvn+7+vjvQSZWRv6FB/cz4F7LCdMXDITNMxOTu37SCnm5+Yj8bhMy/nsI91pOCI8YAqc3qoBePw183LFzfYhqf1ZobwDAql/3YPBnkboKi0gjmSjDeEVeXh7+/PNPLFu2DLt27UKjRo0wYMAA9OjRAykpKZg4cSJOnDiB8+fPl7DFS9qGQgSgFhIyNus6CNJTbpYdYV6th67DID2Wnbj2pV3r1kPpftc5WXSUrK1XhdY9N6NGjcLatWshhECvXr0wa9Ys1K37tDtcoVBg9uzZhd4aTkRERGVjqMNJUtE6uTl//jwWLFiArl27FjtfpkqVKiVeMk5EREQkBa2Tm+jo6Bc3bmKC1q1ba3sJIiIiKoJMxhWwmpQqudm0aVOJ63bq1KnUwRAREdGLcVhKs1IlN126dClRPZlMhoKC8nmCKhEREZEmpUpulEplecVBREREJWSoD9+TiqG+7ZyIiIheU6XquZk/fz4GDx4MMzMzzJ8/X2Pd0aNHlykwIiIiKho7bjQr1UP8qlevjmPHjqFy5cqoXr168Y3KZLh27ZoW4fAhflQWfIgfaY8P8aOyepkP8bv/qOQLfF6kspnhLQAqVc/N9evXi/xvIiIioleFJO+WIiIiopeHE4o10zq5EUJgw4YN2L17N+7evVtoJdXvv/9e5uCIiIioKMxuNNE6uRk7dix+/PFHBAQEwN7eHjKmkURERPQK0Dq5WbVqFX7//Xe88847UsZDRERELyBjz41GWic3lSpVgru7u5SxEBERUQnIZHxMnSZafzqTJ09GWFgYsrOzpYyHiIiIqEy07rnp1q0b1q5dCzs7O7i5uaFChQpqx0+cOFHm4IiIiKgoHJbSROvkpk+fPjh+/Dg+/fRTTigmIiJ6iTjnRjOtk5utW7fir7/+QosWLaSMh4iIiKhMtE5uXFxcYGVlJWUsREREVCLsudFE6wnFc+bMwZdffomEhAQJwyEiIqIXkcmMJNsMkdY9N59++ikePnwIDw8PWFhYFJpQnJqaWubgiIiIiEpL6+Rm3rx5EoZBREREJcdhKU3KtFqKiIiIXj6ultJMkreCP3r0CLm5uWplnGxMREREuqD1TKKsrCyMHDkSdnZ2UCgUsLGxUduIiIiofMgk/D9DpHVy8+WXX+Kff/7BokWLIJfLsXTpUoSFhcHJyQkrV66UMkYiIiJSYyThZni0HpbavHkzVq
5cCX9/f/Tr1w8tW7ZEjRo14OrqitWrV6Nnz55SxklERERUIlqnbKmpqaq3gltZWamWfrdo0QJ79+6VJjoiIiIqRCaTSbYZIq2TG3d3d1y/fh0A4OnpifXr1wN43KNjbW0tSXBERERUFJmEm+EpdXJz7do1KJVK9OvXD6dOnQIAfPXVV4iIiICZmRnGjRuHL774QvJAiYiIiEqi1MlNzZo1ce/ePYwbNw6jR49G9+7d4e3tjQsXLmDNmjU4efIkxowZUx6xEhEREfRjtVRqaip69uwJKysrWFtbY8CAAcjMzNRYf9SoUahduzbMzc1RrVo1jB49Gv/991+pr13q5EYIoba/bds2ZGVlwdXVFV27doWPj0+pgyAiIqLSePVXS/Xs2RPnzp3Drl27sGXLFuzduxeDBw8utv6tW7dw69YtzJ49G2fPnsXy5cuxY8cODBgwoNTXluQhfkRERERPxMfHY8eOHTh69CgaNWoEAFiwYAHeeecdzJ49G05OToXOqVu3Ln777TfVvoeHB6ZPn45PP/0U+fn5MDEpecpS6pStqNnVhjrbmoiI6FUk5bBUTk4O0tPT1bacnJwyxRcbGwtra2tVYgMAgYGBMDIywuHDh0vczn///QcrK6tSJTaAFj03Qgj07dsXcrkcwONXLwwdOhQKhUKt3u+//17apomIiKgEpOxUCA8PR1hYmFpZaGgoJk+erHWbycnJsLOzUyszMTGBra0tkpOTS9TGvXv3MHXqVI1DWcUpdXLz/AszP/3001JflIiIiF4NwcHBGD9+vFrZkw6M53311Vf43//+p7G9+Pj4MseUnp6Od999F15eXlolWaVObn766adSX4SIiIikJF3PjVwuLzaZed5nn32Gvn37aqzj7u4OBwcH3L17V608Pz8fqampcHBw0Hh+RkYG2rdvD0tLS/zxxx+oUKFCiWJ7FicUExER6RmZjt4JVbVqVVStWvWF9Zo2bYq0tDQcP34cDRs2BAD8888/UCqVaNy4cbHnpaenIygoCHK5HJs2bYKZmZlWcRrmG7OIiIhIZ9588020b98egwYNwpEjR3DgwAGMHDkSH3/8sWqlVFJSEjw9PXHkyBEAjxObdu3aISsrC1FRUUhPT0dycjKSk5NRUFBQquuz54aIiEjvvPqrlFevXo2RI0fi7bffhpGRET744APMnz9fdTwvLw8XL17Ew4cPAQAnTpxQraSqUaOGWlvXr1+Hm5tbia/N5IaIiEjP6MMjWGxtbbFmzZpij7u5uak9GNjf37/Qg4K1xWEpIiIiMijsuSEiItI7r37PjS4xuSEiItIzulotpS/46RAREZFBYc8NERGR3uGwlCZMboiIiPSMjMmNRhyWIiIiIoPCnhsiIiI9ow/PudElJjdERER6hwMvmvDTISIiIoPCnhsiIiI9wwnFmjG5ISIi0jtMbjThsBQREREZFPbcEBER6RmultKMyQ0REZHe4cCLJvx0iIiIyKCw54aIiEjPcLWUZjIhhNB1EKRZTk4OwsPDERwcDLlcrutwSA/xO0Rlxe8Q6RMmN3ogPT0dlSpVwn///QcrKytdh0N6iN8hKit+h0ifcM4NERERGRQmN0RERGRQmNwQERGRQWFyowfkcjlCQ0M5iY+0xu8QlRW/Q6RPOKGYiIiIDAp7boiIiMigMLkhIiIig8LkhoiIiAwKkxuil2Dy5MmoX7++rsMoFzExMZDJZEhLS9N1KKQFNzc3zJs3T7Uvk8mwcePGl3LthIQEyGQyxMXFvZTr0euDyY2OxcbGwtjYGO+++66uQ3ntREZGwtLSEvn5+aqyzMxMVKhQAf7+/mp1n/wDfvXq1Zcc5dNrF7UlJye/9Hie16xZM9y+fRuVKlXSdSgGoW/fvkX+rK9cuVIu1zt69CgGDx5cLm1fv34dn3zyCZycnGBmZoY33ngDnTt3xoULFwAALi4uuH37NurWrVsu16fXF1+cqWNRUVEYNWoUoqKicOvWLTg5Oek6pN
dGQEAAMjMzcezYMTRp0gQAsG/fPjg4OODw4cN49OgRzMzMAAC7d+9GtWrV4OHhUaprCCFQUFAgSbwXL14s9Nh7Ozs7SdrWVl5eHkxNTeHg4KDTOAxN+/bt8dNPP6mVVa1atVyuVV7t5uXloW3btqhduzZ+//13ODo64ubNm9i+fbuql8/Y2JjfHSoX7LnRoczMTKxbtw7Dhg3Du+++i+XLl6sd37RpE2rWrAkzMzMEBARgxYoVhbr/9+/fj5YtW8Lc3BwuLi4YPXo0srKyXu6N6KnatWvD0dERMTExqrKYmBh07twZ1atXx6FDh9TKAwICkJOTg9GjR8POzg5mZmZo0aIFjh49qlZPJpNh+/btaNiwIeRyOfbv31/o2levXoW7uztGjhyJkj6Nwc7ODg4ODmqbkZERHj16hDp16qj99X316lVYWlpi2bJlAIDly5fD2toaGzduVH2ngoKCcOPGDbVr/Pnnn2jQoAHMzMzg7u6OsLAwtZ4tmUyGRYsWoVOnTlAoFJg+fXqRw1Iv+l66ublhxowZ6N+/PywtLVGtWjUsXrxYLZabN2+iR48esLW1hUKhQKNGjXD48OESx6rP5HJ5oZ/1999/D29vbygUCri4uGD48OHIzMxUnfPkZ7xlyxbUrl0bFhYW+PDDD/Hw4UOsWLECbm5usLGxwejRo9US7ueHpZ7Vpk0bjBw5Uq0sJSUFpqamiI6O1ngP586dw9WrV/HDDz+gSZMmcHV1RfPmzTFt2jTVHxPPD0sV12v15H+jOTk5+Pzzz+Hs7AyFQoHGjRur/e+XSEWQzkRFRYlGjRoJIYTYvHmz8PDwEEqlUgghxLVr10SFChXE559/Li5cuCDWrl0rnJ2dBQDx4MEDIYQQV65cEQqFQnz33Xfi0qVL4sCBA8LX11f07dtXV7ekdz755BPRrl071f5bb70lfv31VzF06FAREhIihBDi4cOHQi6Xi+XLl4vRo0cLJycnsW3bNnHu3DnRp08fYWNjI+7fvy+EEGL37t0CgPDx8RE7d+4UV65cEffv3xehoaGiXr16QgghTp06JRwcHMQ333xTohiftPnk516UkydPClNTU7Fx40aRn58vmjRpIt5//33V8Z9++klUqFBBNGrUSBw8eFAcO3ZM+Pn5iWbNmqnq7N27V1hZWYnly5eLq1evip07dwo3NzcxefJkVR0Aws7OTixbtkxcvXpV/Pvvv4XiK8n30tXVVdja2oqIiAhx+fJlER4eLoyMjMSFCxeEEEJkZGQId3d30bJlS7Fv3z5x+fJlsW7dOnHw4MESx6qv+vTpIzp37lyo/LvvvhP//POPuH79uoiOjha1a9cWw4YNUx1/8jNu27atOHHihNizZ4+oXLmyaNeunejWrZs4d+6c2Lx5szA1NRW//PKL6jxXV1fx3XffqfYBiD/++EMIIcTq1auFjY2NePToker43LlzhZubm+p3VXFu3rwpjIyMxOzZs0V+fn6Rda5fvy4AiJMnTwohhEhLSxO3b99WbWPGjBF2dnbi9u3bQgghBg4cKJo1ayb27t0rrly5Ir799lshl8vFpUuXNMZCrx8mNzrUrFkzMW/ePCGEEHl5eaJKlSpi9+7dQgghJkyYIOrWratW/5tvvlH7R2TAgAFi8ODBanX27dsnjIyMRHZ2drnHbwiWLFkiFAqFyMvLE+np6cLExETcvXtXrFmzRrRq1UoIIUR0dLQAIBISEkSFChXE6tWrVefn5uYKJycnMWvWLCHE00Rk48aNatd5ktwcOHBA2NjYiNmzZ5c4xidtKhQKtc3Ly0ut3qxZs0SVKlXEyJEjhaOjo7h3757q2E8//SQAiEOHDqnK4uPjBQBx+PBhIYQQb7/9tpgxY4Zam6tWrRKOjo6qfQBi7NixRcZXmu+lq6ur+PTTT1XHlUqlsLOzE4sWLRJCCPHjjz8KS0tLVdL4vJLEqq/69OkjjI2N1X7WH374YaF6v/76q6hcubJq/8nP+MqVK6
qyIUOGCAsLC5GRkaEqCwoKEkOGDFHta0pusrOzhY2NjVi3bp3quI+PT4mTyIULFwoLCwthaWkpAgICxJQpU8TVq1dVx59Pbp7122+/CTMzM7F//34hhBD//vuvMDY2FklJSWr13n77bREcHFyieOj1wTk3OnLx4kUcOXIEf/zxBwDAxMQE3bt3R1RUFPz9/XHx4kW89dZbauf4+fmp7Z86dQqnT5/G6tWrVWVCCCiVSly/fh1vvvlm+d+InvP390dWVhaOHj2KBw8eoFatWqhatSpat26Nfv364dGjR4iJiYG7uzv+++8/5OXloXnz5qrzK1SoAD8/P8THx6u126hRo0LXSkxMRNu2bTF9+nSMHTu21LHu27cPlpaWatd+1meffYaNGzdi4cKF2L59OypXrqx23MTERO075enpCWtra8THx8PPzw+nTp3CgQMHMH36dFWdgoICPHr0CA8fPoSFhUWx9/askn4vfXx8VMdlMhkcHBxw9+5dAEBcXBx8fX1ha2tb7DVKEqu+CggIwKJFi1T7CoUCf//9N8LDw3HhwgWkp6cjPz+/0P1aWFiozQuzt7eHm5sbKlasqFb25HN+ETMzM/Tq1QvLli1Dt27dcOLECZw9exabNm0q0fkjRoxA7969ERMTg0OHDuHXX3/FjBkzsGnTJrRt27bY806ePIlevXph4cKFqv+9nTlzBgUFBahVq5Za3ZycnELfdSImNzoSFRWF/Px8tQnEQgjI5XIsXLiwRG1kZmZiyJAhGD16dKFj1apVkyxWQ1ajRg288cYb2L17Nx48eIDWrVsDAJycnODi4oKDBw9i9+7daNOmTanaVSgUhcqqVq0KJycnrF27Fv379y80OfhFqlevDmtr62KP3717F5cuXYKxsTEuX76M9u3bl6r9zMxMhIWFoWvXroWOPZlYDRR9b8+3U5Lv5fPJmUwmg1KpBACYm5tLEqu+UigUqFGjhmo/ISEB7733HoYNG4bp06fD1tYW+/fvx4ABA5Cbm6tKbor6TDV9ziUxcOBA1K9fHzdv3sRPP/2ENm3awNXVtcTnW1paomPHjujYsSOmTZuGoKAgTJs2rdjkJjk5GZ06dcLAgQMxYMAAVXlmZiaMjY1x/PhxGBsbq53zbPJGBDC50Yn8/HysXLkSc+bMQbt27dSOdenSBWvXrkXt2rWxbds2tWPPTlwFgAYNGuD8+fNqvwSp9AICAhATE4MHDx7giy++UJW3atUK27dvx5EjRzBs2DB4eHjA1NQUBw4cUP1yz8vLw9GjR0vUE2Nubo4tW7bgnXfeQVBQEHbu3KnWE1NW/fv3h7e3NwYMGIBBgwYhMDBQrfcuPz8fx44dU/UAXrx4EWlpaao6DRo0wMWLF8v8fZLie+nj44OlS5ciNTW1yN4bqWLVF8ePH4dSqcScOXNgZPR4Hcj69etfyrW9vb3RqFEjLFmyBGvWrCnxH19Fkclk8PT0xMGDB4s8/ujRI3Tu3Bmenp6YO3eu2jFfX18UFBTg7t27aNmypdYx0OuByY0ObNmyBQ8ePMCAAQMKPRvkgw8+QFRUFNavX4+5c+diwoQJGDBgAOLi4lSrqWQyGQBgwoQJaNKkCUaOHImBAwdCoVDg/Pnz2LVrV5l+Ab1uAgICMGLECOTl5al6bgCgdevWGDlyJHJzcxEQEACFQoFhw4bhiy++gK2tLapVq4ZZs2bh4cOHan9haqJQKLB161Z06NABHTp0wI4dO0r8V+fdu3fx6NEjtbLKlSujQoUKiIiIQGxsLE6fPg0XFxds3boVPXv2xKFDh2Bqagrg8V/1o0aNwvz582FiYoKRI0eiSZMmqmQnJCQE7733HqpVq4YPP/wQRkZGOHXqFM6ePYtp06aVKEZAmu9ljx49MGPGDHTp0gXh4eFwdHTEyZMn4eTkhKZNm0oWq76oUaMG8vLysGDBAnTs2BEHDhxAZGTkS7v+wIEDMXLkSCgUCrz//vslOicuLg6hoaHo1asXvL
y8YGpqij179mDZsmWYMGFCkecMGTIEN27cQHR0NFJSUlTltra2qFWrFnr27InevXtjzpw58PX1RUpKCqKjo+Hj48NnhZE6Hc/5eS2999574p133iny2OHDhwUAcerUKfHnn3+KGjVqCLlcLvz9/cWiRYsEALXJwkeOHBFt27YVFStWFAqFQvj4+Ijp06e/rFsxCE8mNXp6eqqVJyQkCACidu3aqrLs7GwxatQoUaVKFSGXy0Xz5s3FkSNHVMeLW9n07GopIR6vBmrWrJlo1aqVyMzM1BjfkzaL2mJjY0V8fLwwNzcXa9asUZ3z4MED4eLiIr788kshxOPJppUqVRK//fabcHd3F3K5XAQGBop///1X7Vo7duwQzZo1E+bm5sLKykr4+fmJxYsXq47jmcmmmu75Rd/L5yexCiFEvXr1RGhoqGo/ISFBfPDBB8LKykpYWFiIRo0aqSY/lyRWfVXcaqm5c+cKR0dHYW5uLoKCgsTKlSvVPvcnP+NnPf+9K6p9TROKn8jIyBAWFhZi+PDhJb6PlJQUMXr0aFG3bl1RsWJFYWlpKby9vcXs2bNFQUGBEKLwhGJXV9civ+dPFlrk5uaKkJAQ4ebmJipUqCAcHR3F+++/L06fPl3iuOj1IBOihA/ZIJ2bPn06IiMjCz2bhOhFli9fjrFjx/IVCaSVhIQEeHh44OjRo2jQoIGuwyF6IQ5LvcJ++OEHvPXWW6hcuTIOHDiAb7/9ttADtYiIykteXh7u37+PiRMnokmTJkxsSG/wCcWvsMuXL6Nz587w8vLC1KlT8dlnn2Hy5Mm6Dosk1qFDB1SsWLHIbcaMGboOj15jBw4cgKOjI44ePVpojs++ffuK/d5y9RLpGoeliHQsKSkJ2dnZRR6ztbUt9lkvRLqUnZ2NpKSkYo+/LivZ6NXE5IaIiIgMCoeliIiIyKAwuSEiIiKDwuSGiIiIDAqTGyIiIjIoTG6IiIjIoDC5ISIiIoPC5IaIiIgMCpMbIiIiMij/B1hqdcOpfAIVAAAAAElFTkSuQmCC\n"
},
"metadata": {}
}
]
},
{
"cell_type": "markdown",
"source": [
"**Model Building**"
],
"metadata": {
"id": "KdhBi8paSfyb"
}
},
{
"cell_type": "code",
"source": [
"# Train a Gaussian Naive Bayes classifier on the training set\n",
"from sklearn.naive_bayes import GaussianNB\n",
"from sklearn.metrics import confusion_matrix, classification_report\n",
"\n",
"# Instantiate and fit the model\n",
"gnb1 = GaussianNB()\n",
"model_nb1 = gnb1.fit(trainX, train_y)\n",
"\n",
"# Predict on the training set (in-sample evaluation)\n",
"yhat3 = gnb1.predict(trainX)\n",
"\n",
"cm3 = confusion_matrix(train_y.values, yhat3, labels=[0, 1, 2, 3])\n",
"print('\\n\\n-------The confusion matrix for this model is-------')\n",
"print(cm3)\n",
"\n",
"print('\\n\\n-------Printing the whole report of the model-------')\n",
"print(classification_report(train_y.values, yhat3))\n"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "9HTy1Bq7SrqV",
"outputId": "2ff1b61a-d9f8-43b3-a206-76b3529079ac"
},
"execution_count": 32,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"\n",
"\n",
"-------The confusion matrix for this model is-------\n",
"[[ 534 233 464 347]\n",
" [ 288 310 710 178]\n",
" [ 110 164 1088 214]\n",
" [ 310 137 135 1232]]\n",
"\n",
"\n",
"-------Printing the whole report of the model-------\n",
" precision recall f1-score support\n",
"\n",
" 0 0.43 0.34 0.38 1578\n",
" 1 0.37 0.21 0.27 1486\n",
" 2 0.45 0.69 0.55 1576\n",
" 3 0.63 0.68 0.65 1814\n",
"\n",
" accuracy 0.49 6454\n",
" macro avg 0.47 0.48 0.46 6454\n",
"weighted avg 0.48 0.49 0.47 6454\n",
"\n"
]
}
]
},
{
"cell_type": "code",
"source": [
"test_nb1_x = testX.copy()\n",
"test_nb1_x.head()"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 255
},
"id": "D8jF3RmqSxHZ",
"outputId": "9dc4ad2e-ddd3-495a-e76a-2978a3792f27"
},
"execution_count": 33,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
" Age Work_Experience Family_Size Gender_Female Gender_Male \\\n",
"4463 18 1.0 3.0 True False \n",
"1687 38 0.0 3.0 False True \n",
"5694 43 0.0 2.0 True False \n",
"7390 63 0.0 2.0 True False \n",
"1347 35 1.0 1.0 True False \n",
"\n",
" Ever_Married_No Ever_Married_Yes Graduated_No Graduated_Yes \\\n",
"4463 True False True False \n",
"1687 False True True False \n",
"5694 True False True False \n",
"7390 False True True False \n",
"1347 True False False True \n",
"\n",
" Profession_Artist ... Spending_Score_Average Spending_Score_High \\\n",
"4463 False ... False False \n",
"1687 False ... False True \n",
"5694 False ... False False \n",
"7390 True ... False True \n",
"1347 False ... False False \n",
"\n",
" Spending_Score_Low Var_1_Cat_1 Var_1_Cat_2 Var_1_Cat_3 Var_1_Cat_4 \\\n",
"4463 True False False True False \n",
"1687 False False False False False \n",
"5694 True False False False True \n",
"7390 False False False False False \n",
"1347 True False False False True \n",
"\n",
" Var_1_Cat_5 Var_1_Cat_6 Var_1_Cat_7 \n",
"4463 False False False \n",
"1687 False True False \n",
"5694 False False False \n",
"7390 False True False \n",
"1347 False False False \n",
"\n",
"[5 rows x 28 columns]"
],
"application/vnd.google.colaboratory.intrinsic+json": {
"type": "dataframe",
"variable_name": "test_nb1_x"
}
},
"metadata": {},
"execution_count": 33
}
]
},
{
"cell_type": "code",
"source": [
"test_nb1_y = test_y.copy()\n",
"test_nb1_y.head()"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 241
},
"id": "q5lF1dYuS0aW",
"outputId": "05c075d1-04b2-4d10-e76f-3a909511f53e"
},
"execution_count": 34,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"4463 3\n",
"1687 2\n",
"5694 3\n",
"7390 1\n",
"1347 3\n",
"Name: Segmentation, dtype: int64"
]
},
"metadata": {},
"execution_count": 34
}
]
},
{
"cell_type": "code",
"source": [
"# apply gnb1 prediction on test_nb1_x\n",
"\n",
"y_nb1 = gnb1.predict(test_nb1_x)\n",
"y_nb1\n"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "Zg2xzA1rUVJO",
"outputId": "cc2d0a7a-4321-4fdf-9ebe-a9377b342200"
},
"execution_count": 35,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"array([3, 1, 3, ..., 1, 1, 0])"
]
},
"metadata": {},
"execution_count": 35
}
]
},
{
"cell_type": "code",
"source": [
"pd.Series(y_nb1).value_counts()"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 209
},
"id": "IiQspLmxYAvj",
"outputId": "ff970bbc-eb75-4480-c6c7-5c11abc6443b"
},
"execution_count": 36,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"2 569\n",
"3 517\n",
"0 321\n",
"1 207\n",
"Name: count, dtype: int64"
]
},
"metadata": {},
"execution_count": 36
}
]
},
{
"cell_type": "code",
"source": [
"from sklearn.metrics import confusion_matrix\n",
"print('-------The confusion matrix for test data is-------\\n')\n",
"print(confusion_matrix(test_nb1_y.values, y_nb1, labels=[0,1,2,3]))\n",
"\n",
"from sklearn.metrics import classification_report\n",
"print('\\n\\n-------Printing the report of test data-------\\n')\n",
"print(classification_report(test_nb1_y.values, y_nb1))"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "5HuB7o5pYON2",
"outputId": "e838f65c-1d28-4b6b-cf6e-b5ebd93cf27e"
},
"execution_count": 38,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"-------The confusion matrix for test data is-------\n",
"\n",
"[[147 54 113 80]\n",
" [ 78 72 167 55]\n",
" [ 28 44 253 69]\n",
" [ 68 37 36 313]]\n",
"\n",
"\n",
"-------Printing the report of test data-------\n",
"\n",
" precision recall f1-score support\n",
"\n",
" 0 0.46 0.37 0.41 394\n",
" 1 0.35 0.19 0.25 372\n",
" 2 0.44 0.64 0.53 394\n",
" 3 0.61 0.69 0.64 454\n",
"\n",
" accuracy 0.49 1614\n",
" macro avg 0.46 0.47 0.46 1614\n",
"weighted avg 0.47 0.49 0.47 1614\n",
"\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"**Model Evaluation**"
],
"metadata": {
"id": "P_6v8yfjYZlL"
}
},
{
"cell_type": "code",
"source": [
"print('************************ MODEL-1 REPORT *********************************\\n')\n",
"print('Train data')\n",
"print(classification_report(train_y.values, yhat3))\n",
"print('\\nTest data')\n",
"print(classification_report(test_nb1_y.values, y_nb1))"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "G2WCPEtTYc5_",
"outputId": "86760a4f-04a8-4ad8-83ed-df6c0e6dcecd"
},
"execution_count": 40,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"************************ MODEL-1 REPORT *********************************\n",
"\n",
"Train data\n",
" precision recall f1-score support\n",
"\n",
" 0 0.43 0.34 0.38 1578\n",
" 1 0.37 0.21 0.27 1486\n",
" 2 0.45 0.69 0.55 1576\n",
" 3 0.63 0.68 0.65 1814\n",
"\n",
" accuracy 0.49 6454\n",
" macro avg 0.47 0.48 0.46 6454\n",
"weighted avg 0.48 0.49 0.47 6454\n",
"\n",
"\n",
"Test data\n",
" precision recall f1-score support\n",
"\n",
" 0 0.46 0.37 0.41 394\n",
" 1 0.35 0.19 0.25 372\n",
" 2 0.44 0.64 0.53 394\n",
" 3 0.61 0.69 0.64 454\n",
"\n",
" accuracy 0.49 1614\n",
" macro avg 0.46 0.47 0.46 1614\n",
"weighted avg 0.47 0.49 0.47 1614\n",
"\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"**Analysis of the Results**\n",
"The confusion matrix shown in the image provides key insights into the performance of the GNB model:\n",
"\n",
"For Train Data:\n",
"The F1 scores indicate that Segment D is classified the best, while Segment B performs the worst with a recall of 0.21.\n",
"\n",
"Fo Test Data:\n",
"Accuracy: 0.49 (same as the train data, which shows the model is not overfitting or underfitting drastically)\n",
"\n",
"Key Takeaways:\n",
"\n",
"--Segment D: The model performs best on this segment, achieving high precision, recall, and F1-scores in both the train and test sets.\n",
"\n",
"--Segment B: This segment is the weakest, with very low recall and F1-score, which means the model is struggling to correctly identify and predict this group.\n",
"\n",
"--Overall Accuracy: With an accuracy of 49%, the model isn't performing exceptionally well but is providing useful predictions, especially for Segment D and C.\n"
],
"metadata": {
"id": "2NyC4ja1Z02n"
}
},
{
"cell_type": "markdown",
"source": [
"### In-Class Activity 3: Predict Class Probabilities with Gaussian Naive Bayes\n",
"- Objective: Train a Gaussian Naive Bayes model and predict the probability of each class for a few instances.\n",
"\n",
"#### Steps for the Activity:\n",
"`Train the Model:`\n",
"- Train the Gaussian Naive Bayes model on the training data.`\n",
"\n",
"`Predict Class Probabilities:`\n",
"- Use the trained model to predict class probabilities for a few instances from the test set.\n",
"\n",
"`Interpret the Probabilities:`\n",
"- Print the predicted probabilities and discuss how confident the model is for each class.\n",
"\n",
"`Hint:` mean (`theta_`) and variance (`var_`) can be extracted from the model object."
],
"metadata": {
"id": "0bhLnhf9hgx8"
}
},
{
"cell_type": "code",
"source": [
"## Solution:\n",
"\n",
"import numpy as np\n",
"from sklearn.naive_bayes import GaussianNB\n",
"\n",
"# Step 1: Train the Gaussian Naive Bayes model\n",
"gnb = GaussianNB()\n",
"gnb.fit(trainX, train_y)\n",
"\n",
"# Step 2: Extract the mean and variance of each feature learned by the model\n",
"means = gnb.theta_ # Mean of each feature per class\n",
"variances = gnb.var_ # Variance of each feature per class\n",
"\n",
"# Step 3: Display the means and variances for analysis\n",
"print(\"Feature Means per Class:\")\n",
"print(means)\n",
"\n",
"print(\"\\nFeature Variances per Class:\")\n",
"print(variances)\n",
"\n",
"# Step 4: Brief Interpretation\n",
"# Learners should compare how much each feature varies across the classes to identify important features.\n"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "y46xEZsujg5z",
"outputId": "5a6bd6e3-9d28-4994-bece-aa4145d30328"
},
"execution_count": 43,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Feature Means per Class:\n",
"[[4.50114068e+01 2.69645120e+00 2.45944233e+00 4.59442332e-01\n",
" 5.40557668e-01 4.03675539e-01 5.96324461e-01 3.75792142e-01\n",
" 6.24207858e-01 3.00380228e-01 1.02661597e-01 1.26742712e-01\n",
" 1.84410646e-01 6.08365019e-02 5.64005070e-02 3.67553866e-02\n",
" 1.00760456e-01 3.10519645e-02 1.75538657e-01 1.41318124e-01\n",
" 6.83143219e-01 1.58428390e-02 4.24588086e-02 1.11533587e-01\n",
" 1.68567807e-01 1.07731305e-02 6.25475285e-01 2.53485425e-02]\n",
" [4.83371467e+01 2.20390310e+00 2.69986541e+00 4.48855989e-01\n",
" 5.51144011e-01 2.52355316e-01 7.47644684e-01 2.65814266e-01\n",
" 7.34185734e-01 4.20592194e-01 7.40242261e-02 1.00942127e-01\n",
" 1.16419919e-01 1.03633917e-01 5.31628533e-02 2.96096904e-02\n",
" 8.61372813e-02 1.54777927e-02 3.18304172e-01 2.07940781e-01\n",
" 4.73755047e-01 1.48048452e-02 5.58546433e-02 9.82503365e-02\n",
" 1.17092867e-01 1.00942127e-02 6.79676985e-01 2.42261104e-02]\n",
" [4.90697970e+01 2.14784264e+00 2.97144670e+00 4.62563452e-01\n",
" 5.37436548e-01 1.97969543e-01 8.02030457e-01 1.69416244e-01\n",
" 8.30583756e-01 5.57106599e-01 7.29695431e-02 3.17258883e-02\n",
" 8.05837563e-02 8.81979695e-02 7.17005076e-02 1.39593909e-02\n",
" 6.72588832e-02 1.64974619e-02 4.66370558e-01 1.98604061e-01\n",
" 3.35025381e-01 1.58629442e-02 4.75888325e-02 7.80456853e-02\n",
" 5.20304569e-02 1.01522843e-02 7.71573604e-01 2.47461929e-02]\n",
" [3.32260198e+01 2.73263506e+00 3.19790518e+00 4.29437707e-01\n",
" 5.70562293e-01 7.10033076e-01 2.89966924e-01 6.35611907e-01\n",
" 3.64388093e-01 8.54465270e-02 8.82028666e-02 7.66262404e-02\n",
" 9.70231533e-02 4.90628445e-02 4.33296582e-01 4.24476295e-02\n",
" 5.34729879e-02 7.44211687e-02 6.00882029e-02 6.89084895e-02\n",
" 8.71003308e-01 2.20507166e-02 6.17420066e-02 1.09151047e-01\n",
" 1.83020948e-01 1.21278942e-02 5.86549063e-01 2.53583241e-02]]\n",
"\n",
"Feature Variances per Class:\n",
"[[2.70632316e+02 1.17399243e+01 2.10640352e+00 2.48355353e-01\n",
" 2.48355353e-01 2.40721876e-01 2.40721876e-01 2.34572686e-01\n",
" 2.34572686e-01 2.10152225e-01 9.21224714e-02 1.10679275e-01\n",
" 1.50403638e-01 5.71356999e-02 5.32197677e-02 3.54047060e-02\n",
" 9.06080646e-02 3.00880179e-02 1.44725115e-01 1.21347590e-01\n",
" 2.16458839e-01 1.55921214e-02 4.06563361e-02 9.90941238e-02\n",
" 1.40152980e-01 1.06573481e-02 2.34256231e-01 2.47062718e-02]\n",
" [2.19846628e+02 9.33325557e+00 1.93952920e+00 2.47384568e-01\n",
" 2.47384568e-01 1.88672389e-01 1.88672389e-01 1.95157320e-01\n",
" 1.95157320e-01 2.43694678e-01 6.85449180e-02 9.07530915e-02\n",
" 1.02866600e-01 9.28942058e-02 5.03368422e-02 2.87332346e-02\n",
" 7.87179280e-02 1.52385086e-02 2.16986904e-01 1.64701690e-01\n",
" 2.49311480e-01 1.45859397e-02 5.27351801e-02 8.85974858e-02\n",
" 1.03382405e-01 9.99259744e-03 2.17716459e-01 2.36394839e-02]\n",
" [2.07552235e+02 8.68182303e+00 1.84372813e+00 2.48598783e-01\n",
" 2.48598783e-01 1.58777881e-01 1.58777881e-01 1.40714658e-01\n",
" 1.40714658e-01 2.46739114e-01 6.76452668e-02 3.07196343e-02\n",
" 7.40902925e-02 8.04193656e-02 6.65598227e-02 1.37648042e-02\n",
" 6.27354038e-02 1.62255736e-02 2.48869339e-01 1.59160766e-01\n",
" 2.22783653e-01 1.56115891e-02 4.53244134e-02 7.19548342e-02\n",
" 4.93235663e-02 1.00494933e-02 1.76248055e-01 2.41340968e-02]\n",
" [2.39320470e+02 1.17372263e+01 2.66811055e+00 2.45021241e-01\n",
" 2.45021241e-01 2.05886385e-01 2.05886385e-01 2.31609688e-01\n",
" 2.31609688e-01 7.81456960e-02 8.04233988e-02 7.07549376e-02\n",
" 8.76099389e-02 4.66559597e-02 2.45550932e-01 4.06461062e-02\n",
" 5.06139054e-02 6.88829363e-02 5.64778887e-02 6.41603875e-02\n",
" 1.12356824e-01 2.15647605e-02 5.79302092e-02 9.72373742e-02\n",
" 1.49524559e-01 1.19810863e-02 2.42509538e-01 2.47155575e-02]]\n"
]
}
]
},
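{
"cell_type": "code",
"source": [
"# Follow-up sketch for the activity above (assumes `gnb` and `test_nb1_x`\n",
"# from the previous cells): predict_proba returns one probability per class\n",
"# for each instance, so each row sums to 1 and the model's confidence is the\n",
"# size of the largest entry.\n",
"proba = gnb.predict_proba(test_nb1_x[:5])\n",
"print(np.round(proba, 3))"
],
"metadata": {},
"execution_count": null,
"outputs": []
},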
{
"cell_type": "markdown",
"source": [
"# **Logistic Regression**"
],
"metadata": {
"id": "AH4oN6HEQPYG"
}
},
{
"cell_type": "code",
"source": [
"import pandas as pd\n",
"\n",
"# Load the uploaded CSV files\n",
"train_file_path = 'Train.csv'\n",
"test_file_path = 'Test.csv'\n",
"\n",
"# Reading the train and test datasets\n",
"train_data = pd.read_csv(train_file_path)\n",
"test_data = pd.read_csv(test_file_path)\n",
"\n",
"# Display the first few rows of the datasets to understand their structure\n",
"train_data.head(), test_data.head()"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "o4-9p1ClkfUD",
"outputId": "fd23d9ba-42c7-4b56-fc38-6bf029b4700c"
},
"execution_count": null,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"( ID Gender Ever_Married Age Graduated Profession Work_Experience \\\n",
" 0 462809 Male No 22 No Healthcare 1.0 \n",
" 1 462643 Female Yes 38 Yes Engineer NaN \n",
" 2 466315 Female Yes 67 Yes Engineer 1.0 \n",
" 3 461735 Male Yes 67 Yes Lawyer 0.0 \n",
" 4 462669 Female Yes 40 Yes Entertainment NaN \n",
" \n",
" Spending_Score Family_Size Var_1 Segmentation \n",
" 0 Low 4.0 Cat_4 D \n",
" 1 Average 3.0 Cat_4 A \n",
" 2 Low 1.0 Cat_6 B \n",
" 3 High 2.0 Cat_6 B \n",
" 4 High 6.0 Cat_6 A ,\n",
" ID Gender Ever_Married Age Graduated Profession Work_Experience \\\n",
" 0 458989 Female Yes 36 Yes Engineer 0.0 \n",
" 1 458994 Male Yes 37 Yes Healthcare 8.0 \n",
" 2 458996 Female Yes 69 No NaN 0.0 \n",
" 3 459000 Male Yes 59 No Executive 11.0 \n",
" 4 459001 Female No 19 No Marketing NaN \n",
" \n",
" Spending_Score Family_Size Var_1 \n",
" 0 Low 1.0 Cat_6 \n",
" 1 Average 4.0 Cat_6 \n",
" 2 Low 1.0 Cat_6 \n",
" 3 High 2.0 Cat_6 \n",
" 4 Low 4.0 Cat_6 )"
]
},
"metadata": {},
"execution_count": 21
}
]
},
{
"cell_type": "code",
"source": [
"from sklearn.model_selection import train_test_split\n",
"from sklearn.preprocessing import LabelEncoder, StandardScaler\n",
"from sklearn.impute import SimpleImputer\n",
"from sklearn.linear_model import LogisticRegression\n",
"from sklearn.metrics import accuracy_score, classification_report\n",
"\n",
"# Preprocessing the data\n",
"def preprocess_data(data, is_train=True):\n",
" # Dropping ID column as it's not relevant\n",
" data = data.drop(columns=['ID'])\n",
"\n",
" # Handling missing values using SimpleImputer\n",
" imputer = SimpleImputer(strategy='most_frequent')\n",
" data[['Work_Experience', 'Family_Size']] = imputer.fit_transform(data[['Work_Experience', 'Family_Size']])\n",
"\n",
" # Encoding categorical variables\n",
" encoder = LabelEncoder()\n",
" data['Gender'] = encoder.fit_transform(data['Gender'])\n",
" data['Ever_Married'] = encoder.fit_transform(data['Ever_Married'])\n",
" data['Graduated'] = encoder.fit_transform(data['Graduated'])\n",
" data['Profession'] = encoder.fit_transform(data['Profession'].astype(str))\n",
" data['Spending_Score'] = encoder.fit_transform(data['Spending_Score'])\n",
" data['Var_1'] = encoder.fit_transform(data['Var_1'].astype(str))\n",
"\n",
" if is_train:\n",
" # Encode the target variable (Segmentation)\n",
" data['Segmentation'] = encoder.fit_transform(data['Segmentation'])\n",
"\n",
" return data\n",
"\n",
"# Preprocess train and test datasets\n",
"train_data_processed = preprocess_data(train_data)\n",
"test_data_processed = preprocess_data(test_data, is_train=False)\n",
"\n",
"# Splitting features and target variable for the train dataset\n",
"X_train = train_data_processed.drop(columns=['Segmentation'])\n",
"y_train = train_data_processed['Segmentation']\n",
"\n",
"# Standardizing the features\n",
"scaler = StandardScaler()\n",
"X_train_scaled = scaler.fit_transform(X_train)\n",
"X_test_scaled = scaler.transform(test_data_processed)\n",
"\n",
"# Applying Logistic Regression\n",
"log_reg = LogisticRegression(max_iter=500)\n",
"log_reg.fit(X_train_scaled, y_train)\n",
"\n",
"# Predicting on the train and test data\n",
"y_train_pred = log_reg.predict(X_train_scaled)\n",
"y_test_pred = log_reg.predict(X_test_scaled)\n",
"\n",
"# Calculating accuracy and classification report for train and test data\n",
"train_accuracy = accuracy_score(y_train, y_train_pred)\n",
"test_accuracy = accuracy_score(y_train[:len(y_test_pred)], y_test_pred)\n",
"\n",
"train_report = classification_report(y_train, y_train_pred, target_names=['A', 'B', 'C', 'D'])\n",
"test_report = classification_report(y_train[:len(y_test_pred)], y_test_pred, target_names=['A', 'B', 'C', 'D'])\n",
"\n",
"# Printing the results\n",
"print(f\"Train Accuracy: {train_accuracy}\")\n",
"print(f\"Test Accuracy: {test_accuracy}\")\n",
"print(\"\\nTrain Classification Report:\")\n",
"print(train_report)\n",
"print(\"\\nTest Classification Report:\")\n",
"print(test_report)\n",
"\n",
"\n"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "vZDYhCEbaDoS",
"outputId": "1409200c-897c-4f61-87ac-159e2a48a192"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Train Accuracy: 0.4970252850768468\n",
"Test Accuracy: 0.26303768557289686\n",
"\n",
"Train Classification Report:\n",
" precision recall f1-score support\n",
"\n",
" A 0.41 0.44 0.42 1972\n",
" B 0.36 0.14 0.20 1858\n",
" C 0.48 0.61 0.54 1970\n",
" D 0.61 0.74 0.67 2268\n",
"\n",
" accuracy 0.50 8068\n",
" macro avg 0.47 0.48 0.46 8068\n",
"weighted avg 0.47 0.50 0.47 8068\n",
"\n",
"\n",
"Test Classification Report:\n",
" precision recall f1-score support\n",
"\n",
" A 0.27 0.28 0.28 670\n",
" B 0.23 0.10 0.14 585\n",
" C 0.24 0.31 0.27 629\n",
" D 0.28 0.34 0.31 743\n",
"\n",
" accuracy 0.26 2627\n",
" macro avg 0.26 0.26 0.25 2627\n",
"weighted avg 0.26 0.26 0.25 2627\n",
"\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"Observations:\n",
"\n",
"- Accuracy(Training Data): 49.74%\n",
" - Overall: The model performs better for Class C and D, similar to the Naive Bayes classifier. Class B remains the most challenging to predict accurately.\n",
"-Accuracy(Test Data): 26.30%\n",
"\n",
"Comparison:\n",
"Logistic Regression:\n",
"\n",
"- Training Accuracy: 49.74%\n",
"- Test Accuracy: 26.30%\n",
"- Test F1-Score (Class C): 0.27\n",
"- Test F1-Score (Class D): 0.31\n",
"\n",
"Gaussian Naive Bayes:\n",
"\n",
"- Training Accuracy: 48.72%\n",
"- Test Accuracy: 26.87%\n",
"- Test F1-Score (Class C): 0.29\n",
"- Test F1-Score (Class D): 0.32\n",
"\n",
"Conclusion:\n",
"\n",
"- Gaussian Naive Bayes outperforms Logistic Regression for this dataset, especially in terms of test accuracy and the F1-scores for key classes (C and D).\n",
"\n",
"- Given the test set accuracy and F1-scores, Gaussian Naive Bayes is the better model for this particular problem."
],
"metadata": {
"id": "EYTcNlVzQjqh"
}
},
{
"cell_type": "markdown",
"source": [
"**Class_Weight(Balanced)**"
],
"metadata": {
"id": "RZUklXwGO63W"
}
},
{
"cell_type": "code",
"source": [
"# Re-loading the datasets and preprocessing\n",
"import pandas as pd\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.preprocessing import LabelEncoder, StandardScaler\n",
"from sklearn.impute import SimpleImputer\n",
"from sklearn.linear_model import LogisticRegression\n",
"from sklearn.metrics import accuracy_score, classification_report\n",
"\n",
"# Paths to the files\n",
"train_file_path = 'Train.csv'\n",
"test_file_path = 'Test.csv'\n",
"\n",
"# Reading the train and test datasets\n",
"train_data = pd.read_csv(train_file_path)\n",
"test_data = pd.read_csv(test_file_path)\n",
"\n",
"# Preprocessing the data\n",
"def preprocess_data(data, is_train=True):\n",
" # Dropping ID column as it's not relevant\n",
" data = data.drop(columns=['ID'])\n",
"\n",
" # Handling missing values using SimpleImputer\n",
" imputer = SimpleImputer(strategy='most_frequent')\n",
" data[['Work_Experience', 'Family_Size']] = imputer.fit_transform(data[['Work_Experience', 'Family_Size']])\n",
"\n",
" # Encoding categorical variables\n",
" encoder = LabelEncoder()\n",
" data['Gender'] = encoder.fit_transform(data['Gender'])\n",
" data['Ever_Married'] = encoder.fit_transform(data['Ever_Married'])\n",
" data['Graduated'] = encoder.fit_transform(data['Graduated'])\n",
" data['Profession'] = encoder.fit_transform(data['Profession'].astype(str))\n",
" data['Spending_Score'] = encoder.fit_transform(data['Spending_Score'])\n",
" data['Var_1'] = encoder.fit_transform(data['Var_1'].astype(str))\n",
"\n",
" if is_train:\n",
" # Encode the target variable (Segmentation)\n",
" data['Segmentation'] = encoder.fit_transform(data['Segmentation'])\n",
"\n",
" return data\n",
"\n",
"# Preprocess train and test datasets\n",
"train_data_processed = preprocess_data(train_data)\n",
"test_data_processed = preprocess_data(test_data, is_train=False)\n",
"\n",
"# Splitting features and target variable for the train dataset\n",
"X_train = train_data_processed.drop(columns=['Segmentation'])\n",
"y_train = train_data_processed['Segmentation']\n",
"\n",
"# Standardizing the features\n",
"scaler = StandardScaler()\n",
"X_train_scaled = scaler.fit_transform(X_train)\n",
"X_test_scaled = scaler.transform(test_data_processed)\n",
"\n",
"# Rebuilding Logistic Regression models with different configurations\n",
"\n",
"# 1. Logistic Regression with class_weight='balanced'\n",
"log_reg_class_weight = LogisticRegression(max_iter=500, class_weight='balanced')\n",
"log_reg_class_weight.fit(X_train_scaled, y_train)\n",
"\n",
"\n",
"\n",
"# Predictions for both models on the train and test datasets\n",
"\n",
"# Class Weight = Balanced\n",
"y_train_pred_class_weight = log_reg_class_weight.predict(X_train_scaled)\n",
"y_test_pred_class_weight = log_reg_class_weight.predict(X_test_scaled)\n",
"\n",
"\n",
"\n",
"# Calculating accuracy and classification reports for both models\n",
"\n",
"# Class Weight = Balanced\n",
"train_accuracy_class_weight = accuracy_score(y_train, y_train_pred_class_weight)\n",
"test_accuracy_class_weight = accuracy_score(y_train[:len(y_test_pred_class_weight)], y_test_pred_class_weight)\n",
"train_report_class_weight = classification_report(y_train, y_train_pred_class_weight, target_names=['A', 'B', 'C', 'D'])\n",
"test_report_class_weight = classification_report(y_train[:len(y_test_pred_class_weight)], y_test_pred_class_weight, target_names=['A', 'B', 'C', 'D'])\n",
"\n",
"\n",
"\n",
"# Printing the results\n",
"\n",
"print(f\"Train Accuracy Class Weight: {train_accuracy_class_weight}\")\n",
"print(f\"Test Accuracy Class Weight: {test_accuracy_class_weight}\")\n",
"\n",
"print(\"\\nTrain Classification Report Class Weight:\")\n",
"print(train_report_class_weight)\n",
"print(\"\\nTest Classification Report Class Weight:\")\n",
"print(test_report_class_weight)\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n",
"\n"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "TloB1IH2O2Ts",
"outputId": "4c7662e4-53d7-487a-fc8d-f84c99748c0f"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Train Accuracy Class Weight: 0.5012394645513139\n",
"Test Accuracy Class Weight: 0.26417967263037684\n",
"\n",
"Train Classification Report Class Weight:\n",
" precision recall f1-score support\n",
"\n",
" A 0.42 0.45 0.44 1972\n",
" B 0.36 0.21 0.26 1858\n",
" C 0.49 0.58 0.53 1970\n",
" D 0.64 0.72 0.67 2268\n",
"\n",
" accuracy 0.50 8068\n",
" macro avg 0.48 0.49 0.48 8068\n",
"weighted avg 0.48 0.50 0.49 8068\n",
"\n",
"\n",
"Test Classification Report Class Weight:\n",
" precision recall f1-score support\n",
"\n",
" A 0.28 0.29 0.28 670\n",
" B 0.22 0.14 0.17 585\n",
" C 0.25 0.29 0.27 629\n",
" D 0.29 0.32 0.30 743\n",
"\n",
" accuracy 0.26 2627\n",
" macro avg 0.26 0.26 0.26 2627\n",
"weighted avg 0.26 0.26 0.26 2627\n",
"\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"Observations:\n",
"\n",
"Comparing the Results:\n",
"Training Accuracy:\n",
"- Without class_weight and multi_class: 49.74%\n",
"- With class_weight='balanced': 50.12%\n",
"- With multi_class='ovr': 48.86%\n",
"\n",
"Analysis:\n",
"\n",
"All models perform similarly on training accuracy, with class_weight='balanced' achieving the highest score, but the differences are marginal.\n",
"\n",
"Test Accuracy:\n",
"- Without class_weight and multi_class: 26.30%\n",
"- With class_weight='balanced': 26.42%\n",
"\n",
"\n",
"Analysis:\n",
"\n",
"The test accuracy is very similar across all models. However, it's important to remember that accuracy is not always the best measure for imbalanced data (which is the case here).\n",
"\n",
"Minority Class (Class B) Performance:\n",
"Recall on Test Data for Class B:\n",
"- Without class_weight and multi_class: 0.10\n",
"\n",
"\n",
"Analysis:\n",
"\n",
"- The class_weight='balanced' model clearly does better in improving recall for the minority class B\n",
"\n",
"F1-Score on Test Data for Class B:\n",
"- Without class_weight and multi_class: 0.13\n",
"- With class_weight='balanced': 0.17\n",
"\n",
"Analysis:\n",
"\n",
"Class-weight balancing improves the F1-score for B compared to unweighted\n",
"\n",
"Conclusion:\n",
"- Introducing class weights helps slightly improve recall for the minority class (Class B), but the overall accuracy and F1-scores for other classes (A, C, D) remain similar.\n",
"- F1-score remains a better evaluation metric compared to accuracy when working with imbalanced data. Accuracy might suggest the model performs well, but it can mask poor performance on minority classes."
],
"metadata": {
"id": "UA5VLX3bUQML"
}
},
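{
"cell_type": "code",
"source": [
"# Sketch of how class_weight='balanced' derives its weights (assumes `y_train`\n",
"# from the cells above). sklearn uses n_samples / (n_classes * bincount(y)),\n",
"# so the rarer a class, the larger its weight in the loss.\n",
"import numpy as np\n",
"from sklearn.utils.class_weight import compute_class_weight\n",
"classes = np.unique(y_train)\n",
"weights = compute_class_weight(class_weight='balanced', classes=classes, y=y_train)\n",
"print(dict(zip(classes, np.round(weights, 3))))"
],
"metadata": {},
"execution_count": null,
"outputs": []
},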
{
"cell_type": "markdown",
"source": [
"**Multiclass=ovr**"
],
"metadata": {
"id": "4YK-BQAx7Pro"
}
},
{
"cell_type": "code",
"source": [
"# Logistic Regression with multi_class='ovr'\n",
"log_reg_multiclass_ovr = LogisticRegression(max_iter=500, multi_class='ovr')\n",
"log_reg_multiclass_ovr.fit(X_train_scaled, y_train)\n",
"\n",
"# Multi-class = OvR\n",
"y_train_pred_multiclass_ovr = log_reg_multiclass_ovr.predict(X_train_scaled)\n",
"y_test_pred_multiclass_ovr = log_reg_multiclass_ovr.predict(X_test_scaled)\n",
"\n",
"# Multi-class = OvR\n",
"train_accuracy_multiclass_ovr = accuracy_score(y_train, y_train_pred_multiclass_ovr)\n",
"test_accuracy_multiclass_ovr = accuracy_score(y_train[:len(y_test_pred_multiclass_ovr)], y_test_pred_multiclass_ovr)\n",
"train_report_multiclass_ovr = classification_report(y_train, y_train_pred_multiclass_ovr, target_names=['A', 'B', 'C', 'D'])\n",
"test_report_multiclass_ovr = classification_report(y_train[:len(y_test_pred_multiclass_ovr)], y_test_pred_multiclass_ovr, target_names=['A', 'B', 'C', 'D'])\n",
"\n",
"print(f\"Train Accuracy Multi-class_OvR: {train_accuracy_multiclass_ovr}\")\n",
"print(f\"Test Accuracy Multi-class_OvR: {test_accuracy_multiclass_ovr}\")\n",
"print(\"\\nTrain Classification Report Multi-class_OvR:\")\n",
"print(train_report_multiclass_ovr)\n",
"print(\"\\nTest Classification Report Multi-class_OvR:\")\n",
"print(test_report_multiclass_ovr)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "6QUTnYcR65DN",
"outputId": "929be307-5da8-40d6-9645-2a2677f685ef"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Train Accuracy Multi-class_OvR: 0.4885969261279127\n",
"Test Accuracy Multi-class_OvR: 0.2554244385230301\n",
"\n",
"Train Classification Report Multi-class_OvR:\n",
" precision recall f1-score support\n",
"\n",
" A 0.40 0.42 0.41 1972\n",
" B 0.33 0.07 0.12 1858\n",
" C 0.47 0.63 0.54 1970\n",
" D 0.59 0.76 0.66 2268\n",
"\n",
" accuracy 0.49 8068\n",
" macro avg 0.45 0.47 0.43 8068\n",
"weighted avg 0.45 0.49 0.45 8068\n",
"\n",
"\n",
"Test Classification Report Multi-class_OvR:\n",
" precision recall f1-score support\n",
"\n",
" A 0.25 0.26 0.25 670\n",
" B 0.20 0.05 0.08 585\n",
" C 0.24 0.33 0.28 629\n",
" D 0.28 0.36 0.31 743\n",
"\n",
" accuracy 0.26 2627\n",
" macro avg 0.24 0.25 0.23 2627\n",
"weighted avg 0.25 0.26 0.24 2627\n",
"\n"
]
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"/usr/local/lib/python3.10/dist-packages/sklearn/linear_model/_logistic.py:1256: FutureWarning: 'multi_class' was deprecated in version 1.5 and will be removed in 1.7. Use OneVsRestClassifier(LogisticRegression(..)) instead. Leave it to its default value to avoid this warning.\n",
" warnings.warn(\n"
]
}
]
},
{
"cell_type": "code",
"source": [
"# Logistic Regression with multi_class='ovr'\n",
"log_reg_multiclass_mn = LogisticRegression(max_iter=500, multi_class='multinomial', solver='lbfgs',)\n",
"log_reg_multiclass_mn.fit(X_train_scaled, y_train)\n",
"\n",
"# Multi-class = OvR\n",
"y_train_pred_multiclass_mn = log_reg_multiclass_mn.predict(X_train_scaled)\n",
"y_test_pred_multiclass_mn = log_reg_multiclass_mn.predict(X_test_scaled)\n",
"\n",
"# Multi-class = OvR\n",
"train_accuracy_multiclass_mn = accuracy_score(y_train, y_train_pred_multiclass_mn)\n",
"test_accuracy_multiclass_mn = accuracy_score(y_train[:len(y_test_pred_multiclass_mn)], y_test_pred_multiclass_mn)\n",
"train_report_multiclass_mn = classification_report(y_train, y_train_pred_multiclass_ovr, target_names=['A', 'B', 'C', 'D'])\n",
"test_report_multiclass_mn = classification_report(y_train[:len(y_test_pred_multiclass_mn)], y_test_pred_multiclass_mn, target_names=['A', 'B', 'C', 'D'])\n",
"\n",
"print(f\"Train Accuracy Multi-class_mn: {train_accuracy_multiclass_mn}\")\n",
"print(f\"Test Accuracy Multi-class_mn: {test_accuracy_multiclass_mn}\")\n",
"print(\"\\nTrain Classification Report Multi-class_mn:\")\n",
"print(train_report_multiclass_mn)\n",
"print(\"\\nTest Classification Report Multi-class_mn:\")\n",
"print(test_report_multiclass_mn)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "KpcU989c7VJ4",
"outputId": "538d3b62-c89b-4aae-f20f-8b197204668b"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Train Accuracy Multi-class_mn: 0.4970252850768468\n",
"Test Accuracy Multi-class_mn: 0.26303768557289686\n",
"\n",
"Train Classification Report Multi-class_mn:\n",
" precision recall f1-score support\n",
"\n",
" A 0.40 0.42 0.41 1972\n",
" B 0.33 0.07 0.12 1858\n",
" C 0.47 0.63 0.54 1970\n",
" D 0.59 0.76 0.66 2268\n",
"\n",
" accuracy 0.49 8068\n",
" macro avg 0.45 0.47 0.43 8068\n",
"weighted avg 0.45 0.49 0.45 8068\n",
"\n",
"\n",
"Test Classification Report Multi-class_mn:\n",
" precision recall f1-score support\n",
"\n",
" A 0.27 0.28 0.28 670\n",
" B 0.23 0.10 0.14 585\n",
" C 0.24 0.31 0.27 629\n",
" D 0.28 0.34 0.31 743\n",
"\n",
" accuracy 0.26 2627\n",
" macro avg 0.26 0.26 0.25 2627\n",
"weighted avg 0.26 0.26 0.25 2627\n",
"\n"
]
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"/usr/local/lib/python3.10/dist-packages/sklearn/linear_model/_logistic.py:1247: FutureWarning: 'multi_class' was deprecated in version 1.5 and will be removed in 1.7. From then on, it will always use 'multinomial'. Leave it to its default value to avoid this warning.\n",
" warnings.warn(\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"**Cost of Misclassification with Logistic Regression**"
],
"metadata": {
"id": "zrYssp7UqyOk"
}
},
{
"cell_type": "markdown",
"source": [
"The dataset contains customer segmentation data with four classes (A, B, C, D). Each customer is placed in one of these segments based on various features (e.g., age, profession, spending score, etc.).\n",
"\n",
"Segments and Their Priorities (Business Scenario):\n",
"- Segment A: High-value customers (misclassifying them is costly).\n",
"- Segment B: Low-value customers (misclassifications are less costly).\n",
"- Segment C: Potential long-term customers (medium misclassification cost).\n",
"- Segment D: Regular customers (medium misclassification cost).\n",
"\n",
"Assigned Costs of Misclassification:\n",
"- Misclassifying a Segment A customer (e.g., as B, C, or D): Cost = 5.\n",
"- Misclassifying a Segment C or D customer: Cost = 2.\n",
"- Misclassifying a Segment B customer: Cost = 1 (lower cost since they are low-priority customers)."
],
"metadata": {
"id": "e2Lmb3OurCKN"
}
},
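{
"cell_type": "code",
"source": [
"# Sketch of the cost computation described above. The variable names `y_true`\n",
"# and `y_pred` are placeholders for actual labels and model predictions, with\n",
"# segments encoded as A=0, B=1, C=2, D=3.\n",
"import numpy as np\n",
"from sklearn.metrics import confusion_matrix\n",
"\n",
"costs = np.array([5, 1, 2, 2])  # per-class misclassification cost for A, B, C, D\n",
"cm = confusion_matrix(y_true, y_pred, labels=[0, 1, 2, 3])\n",
"# Row sums minus the diagonal give the misclassified count per true segment\n",
"misclassified = cm.sum(axis=1) - np.diag(cm)\n",
"print('Total misclassification cost:', int((misclassified * costs).sum()))"
],
"metadata": {},
"execution_count": null,
"outputs": []
},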
{
"cell_type": "code",
"source": [
"import pandas as pd\n",
"from sklearn.preprocessing import StandardScaler, LabelEncoder\n",
"from sklearn.model_selection import train_test_split\n",
"\n",
"# Load your dataset\n",
"# train_data = pd.read_csv('path_to_train.csv')\n",
"\n",
"# Define features (X) and target (y)\n",
"X = train_data.drop(columns=['Segmentation', 'ID']) # Exclude target and ID column\n",
"y = train_data['Segmentation']\n",
"\n",
"# Encode the target variable\n",
"encoder = LabelEncoder()\n",
"y = encoder.fit_transform(y)\n",
"\n",
"# Encode categorical feature columns so that StandardScaler receives only numbers\n",
"for col in X.select_dtypes(include='object').columns:\n",
"    X[col] = LabelEncoder().fit_transform(X[col].astype(str))\n",
"\n",
"# Scale the features\n",
"scaler = StandardScaler()\n",
"X_scaled = scaler.fit_transform(X)\n",
"\n",
"# Split data into training and test sets\n",
"X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.2, random_state=42)"
],
"metadata": {
"id": "_UmwLAafCRbA"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"# Re-loading the datasets and preprocessing\n",
"import pandas as pd\n",
"from sklearn.model_selection import train_test_split\n",
"from sklearn.preprocessing import LabelEncoder, StandardScaler\n",
"from sklearn.impute import SimpleImputer\n",
"from sklearn.linear_model import LogisticRegression\n",
"from sklearn.metrics import accuracy_score, classification_report, confusion_matrix\n",
"\n",
"# Paths to the files\n",
"train_file_path = 'Train.csv'\n",
"test_file_path = 'Test.csv'\n",
"\n",
"# Reading the train and test datasets\n",
"train_data = pd.read_csv(train_file_path)\n",
"test_data = pd.read_csv(test_file_path)\n",
"\n",
"# Preprocessing the data\n",
"def preprocess_data(data, is_train=True):\n",
" # Dropping ID column as it's not relevant\n",
" data = data.drop(columns=['ID'])\n",
"\n",
" # Handling missing values using SimpleImputer\n",
" imputer = SimpleImputer(strategy='most_frequent')\n",
" data[['Work_Experience', 'Family_Size']] = imputer.fit_transform(data[['Work_Experience', 'Family_Size']])\n",
"\n",
" # Encoding categorical variables\n",
" encoder = LabelEncoder()\n",
" data['Gender'] = encoder.fit_transform(data['Gender'])\n",
" data['Ever_Married'] = encoder.fit_transform(data['Ever_Married'])\n",
" data['Graduated'] = encoder.fit_transform(data['Graduated'])\n",
" data['Profession'] = encoder.fit_transform(data['Profession'].astype(str))\n",
" data['Spending_Score'] = encoder.fit_transform(data['Spending_Score'])\n",
" data['Var_1'] = encoder.fit_transform(data['Var_1'].astype(str))\n",
"\n",
" if is_train:\n",
" # Encode the target variable (Segmentation)\n",
" data['Segmentation'] = encoder.fit_transform(data['Segmentation'])\n",
"\n",
" return data\n",
"\n",
"# Preprocess train and test datasets\n",
"train_data_processed = preprocess_data(train_data)\n",
"test_data_processed = preprocess_data(test_data, is_train=False)\n",
"\n",
"# Splitting features and target variable for the train dataset\n",
"X_train = train_data_processed.drop(columns=['Segmentation'])\n",
"y_train = train_data_processed['Segmentation']\n",
"\n",
"# Standardizing the features\n",
"scaler = StandardScaler()\n",
"X_train_scaled = scaler.fit_transform(X_train)\n",
"X_test_scaled = scaler.transform(test_data_processed)\n",
"\n",
"\n",
"# Train Logistic Regression with class_weight and ovr\n",
"log_reg_ovr_weighted = LogisticRegression(class_weight='balanced', multi_class='ovr', solver='lbfgs', max_iter=500)\n",
"log_reg_ovr_weighted.fit(X_train_scaled, y_train)  # fit on the scaled features that are also used for prediction\n",
"\n",
"# Predict on the train data (using X_train_scaled)\n",
"y_pred_ovr_weighted = log_reg_ovr_weighted.predict(X_train_scaled)\n",
"\n",
"# Calculate accuracy and classification report\n",
"accuracy_ovr_weighted = accuracy_score(y_train, y_pred_ovr_weighted)\n",
"report_ovr_weighted = classification_report(y_train, y_pred_ovr_weighted, target_names=['A', 'B', 'C', 'D'])\n",
"print(\"Accuracy (OvR with class_weight='balanced'):\", accuracy_ovr_weighted)\n",
"print(\"Classification Report (OvR with class_weight='balanced'):\")\n",
"print(report_ovr_weighted)\n",
"\n",
"# Confusion Matrix\n",
"cm_ovr_weighted = confusion_matrix(y_train, y_pred_ovr_weighted)\n",
"print(\"Confusion Matrix (OvR with class_weight='balanced'):\")\n",
"print(cm_ovr_weighted)\n",
"\n",
"# Train Logistic Regression with class_weight and multinomial\n",
"log_reg_multinomial_weighted = LogisticRegression(class_weight='balanced', multi_class='multinomial', solver='lbfgs', max_iter=500)\n",
"log_reg_multinomial_weighted.fit(X_train_scaled, y_train)  # fit on the scaled features that are also used for prediction\n",
"\n",
"# Predict on the train data (using X_train_scaled)\n",
"y_pred_multinomial_weighted = log_reg_multinomial_weighted.predict(X_train_scaled)\n",
"\n",
"# Calculate accuracy and classification report\n",
"accuracy_multinomial_weighted = accuracy_score(y_train, y_pred_multinomial_weighted)\n",
"report_multinomial_weighted = classification_report(y_train, y_pred_multinomial_weighted, target_names=['A', 'B', 'C', 'D'])\n",
"print(\"Accuracy (Multinomial with class_weight='balanced'):\", accuracy_multinomial_weighted)\n",
"print(\"Classification Report (Multinomial with class_weight='balanced'):\")\n",
"print(report_multinomial_weighted)\n",
"\n",
"# Confusion Matrix\n",
"cm_multinomial_weighted = confusion_matrix(y_train, y_pred_multinomial_weighted)\n",
"print(\"Confusion Matrix (Multinomial with class_weight='balanced'):\")\n",
"print(cm_multinomial_weighted)\n",
"\n",
"import numpy as np\n",
"\n",
"# Define the cost matrix (higher values represent higher costs of misclassification)\n",
"cost_matrix = np.array([[0, 5, 5, 5], # Misclassifying A\n",
" [1, 0, 1, 1], # Misclassifying B\n",
" [2, 2, 0, 2], # Misclassifying C\n",
" [2, 2, 2, 0]]) # Misclassifying D\n",
"\n",
"# Calculate the total misclassification cost for both models\n",
"misclassification_cost_ovr = np.sum(cm_ovr_weighted * cost_matrix)\n",
"misclassification_cost_multinomial = np.sum(cm_multinomial_weighted * cost_matrix)\n",
"\n",
"print(f\"Total Misclassification Cost (OvR): {misclassification_cost_ovr}\")\n",
"print(f\"Total Misclassification Cost (Multinomial): {misclassification_cost_multinomial}\")"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "lH6jLKbs6KuI",
"outputId": "7da52803-4e83-43b8-b75c-0679eede0140"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stderr",
"text": [
"/usr/local/lib/python3.10/dist-packages/sklearn/linear_model/_logistic.py:1256: FutureWarning: 'multi_class' was deprecated in version 1.5 and will be removed in 1.7. Use OneVsRestClassifier(LogisticRegression(..)) instead. Leave it to its default value to avoid this warning.\n",
" warnings.warn(\n",
"/usr/local/lib/python3.10/dist-packages/sklearn/base.py:493: UserWarning: X does not have valid feature names, but LogisticRegression was fitted with feature names\n",
" warnings.warn(\n",
"/usr/local/lib/python3.10/dist-packages/sklearn/linear_model/_logistic.py:1247: FutureWarning: 'multi_class' was deprecated in version 1.5 and will be removed in 1.7. From then on, it will always use 'multinomial'. Leave it to its default value to avoid this warning.\n",
" warnings.warn(\n"
]
},
{
"output_type": "stream",
"name": "stdout",
"text": [
"Accuracy (OvR with class_weight='balanced'): 0.40865146256817053\n",
"Classification Report (OvR with class_weight='balanced'):\n",
" precision recall f1-score support\n",
"\n",
" A 0.34 0.41 0.37 1972\n",
" B 0.31 0.32 0.31 1858\n",
" C 0.55 0.07 0.13 1970\n",
" D 0.50 0.78 0.61 2268\n",
"\n",
" accuracy 0.41 8068\n",
" macro avg 0.42 0.39 0.35 8068\n",
"weighted avg 0.43 0.41 0.36 8068\n",
"\n",
"Confusion Matrix (OvR with class_weight='balanced'):\n",
"[[ 801 308 29 834]\n",
" [ 689 593 69 507]\n",
" [ 469 945 142 414]\n",
" [ 406 82 19 1761]]\n",
"Accuracy (Multinomial with class_weight='balanced'): 0.40282597917699553\n",
"Classification Report (Multinomial with class_weight='balanced'):\n",
" precision recall f1-score support\n",
"\n",
" A 0.32 0.60 0.42 1972\n",
" B 0.30 0.26 0.28 1858\n",
" C 0.58 0.05 0.10 1970\n",
" D 0.58 0.65 0.61 2268\n",
"\n",
" accuracy 0.40 8068\n",
" macro avg 0.44 0.39 0.35 8068\n",
"weighted avg 0.45 0.40 0.36 8068\n",
"\n",
"Confusion Matrix (Multinomial with class_weight='balanced'):\n",
"[[1185 239 19 529]\n",
" [1039 489 40 290]\n",
" [ 763 839 104 264]\n",
" [ 715 64 17 1472]]\n",
"Total Misclassification Cost (OvR): 11790\n",
"Total Misclassification Cost (Multinomial): 10628\n"
]
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"/usr/local/lib/python3.10/dist-packages/sklearn/linear_model/_logistic.py:469: ConvergenceWarning: lbfgs failed to converge (status=1):\n",
"STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.\n",
"\n",
"Increase the number of iterations (max_iter) or scale the data as shown in:\n",
" https://scikit-learn.org/stable/modules/preprocessing.html\n",
"Please also refer to the documentation for alternative solver options:\n",
" https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression\n",
" n_iter_i = _check_optimize_result(\n",
"/usr/local/lib/python3.10/dist-packages/sklearn/base.py:493: UserWarning: X does not have valid feature names, but LogisticRegression was fitted with feature names\n",
" warnings.warn(\n"
]
}
]
},
{
"cell_type": "markdown",
"source": [
"Observations from the results:\n",
"\n",
"Comparison between the OvR and Multinomial models:\n",
"\n",
"1. Accuracy:\n",
"\n",
"Both models have almost identical accuracy (40.87% for OvR vs. 40.28% for Multinomial), so overall accuracy is not the key differentiator between them.\n",
"\n",
"2. Class-Specific Performance:\n",
"\n",
"- Class A: The Multinomial model has better recall (0.60 vs. 0.41 for OvR), meaning it correctly identifies more class A customers.\n",
"- Class B: Both models struggle with class B; OvR has slightly better recall (0.32 vs. 0.26), though precision is similar.\n",
"- Class C: Both models largely fail on class C, with very low recall (0.07 for OvR and 0.05 for Multinomial).\n",
"- Class D: The OvR model has better recall (0.78 vs. 0.65), so it is more successful at identifying class D customers, though precision is similar.\n",
"\n",
"3. Misclassification Cost:\n",
"\n",
"- OvR total misclassification cost: 11790\n",
"- Multinomial total misclassification cost: 10628\n",
"\n",
"The Multinomial model has the lower misclassification cost (10628 vs. 11790), indicating that it does a better job of avoiding the most costly errors. In particular, its higher recall on class A (the high-cost class) reduces the overall cost.\n",
"\n",
"Conclusion:\n",
"\n",
"Strengths of the Multinomial model:\n",
"\n",
"- Lower misclassification cost, making it more cost-effective in this business scenario.\n",
"- Better recall for class A, which matters because class A represents high-value customers.\n",
"\n",
"Strengths of the OvR model:\n",
"\n",
"- Better recall for class D, i.e., it is better at identifying regular (majority) customers.\n",
"\n",
"Overall, the Multinomial Logistic Regression model with class_weight='balanced' is slightly better here, thanks to its lower misclassification cost and higher recall for high-value class A customers. That said, both models struggle significantly with classes B and C, and neither is a clear winner on accuracy."
],
"metadata": {
"id": "5rYD8yObFttS"
}
}
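,
{
"cell_type": "markdown",
"source": [
"As a possible extension (a sketch, reusing `cost_matrix`, `y_train`, `X_train_scaled`, the fitted `log_reg_multinomial_weighted` model, and the imported `confusion_matrix` from the cells above): the cost matrix can also drive the predictions themselves. Instead of assigning each customer the most probable segment, `predict_proba` can be combined with the cost matrix to assign the segment with the lowest expected cost, which typically trades a little accuracy for a lower total cost."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"import numpy as np\n",
"\n",
"# Class probabilities per customer, shape (n_samples, 4), column order A, B, C, D\n",
"proba = log_reg_multinomial_weighted.predict_proba(X_train_scaled)\n",
"\n",
"# expected_cost[i, j] = sum over true classes k of P(k | x_i) * cost_matrix[k, j]\n",
"expected_cost = proba @ cost_matrix\n",
"\n",
"# Assign each customer the segment with the lowest expected cost\n",
"y_pred_cost_sensitive = np.argmin(expected_cost, axis=1)\n",
"\n",
"cost_sensitive_total = np.sum(confusion_matrix(y_train, y_pred_cost_sensitive) * cost_matrix)\n",
"print('Total misclassification cost (cost-sensitive rule):', cost_sensitive_total)"
],
"metadata": {},
"execution_count": null,
"outputs": []
}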
]
}