Master Logistic Regression with Python: Step-by-Step Guide and Code Examples

Introduction: Logistic regression is a statistical method for modeling the relationship between one or more independent variables and a binary (dichotomous) outcome coded as 0 or 1. In this tutorial, we will walk through the basics of logistic regression, implement it in Python, and apply it to a real-world dataset.
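
Before the step-by-step walkthrough, here is a minimal sketch of the core idea: the model passes a linear combination of the inputs through the sigmoid function to turn it into a probability between 0 and 1. The coefficient values below are made up purely for illustration and are not fitted to any data.

import numpy as np

def sigmoid(z):
    # Map any real number to the (0, 1) interval
    return 1 / (1 + np.exp(-z))

# Hypothetical coefficients, intercept, and a single input vector (illustration only)
w, b = np.array([0.8, -0.5]), 0.1
x = np.array([2.0, 1.5])

p = sigmoid(np.dot(w, x) + b)  # predicted probability of the positive class
print(p)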

Prerequisites:

  • Python 3.x

  • Numpy

  • Pandas

  • Matplotlib

  • Scikit-learn

Steps (exploratory data analysis and other preliminary steps are skipped to keep this guide concise):

  1. Import necessary libraries

  2. Load and preprocess the dataset

  3. Split the dataset into training and test sets

  4. Implement logistic regression

  5. Train and evaluate the model

  6. Visualize the results

  1. Import necessary libraries:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
  2. Load and preprocess the dataset: For this tutorial, we'll use the Titanic dataset hosted in the datasciencedojo datasets repository on GitHub (full URL in the code below).
url = "https://raw.githubusercontent.com/datasciencedojo/datasets/master/titanic.csv"
data = pd.read_csv(url)

# Drop irrelevant columns
data = data.drop(['Name', 'Ticket', 'Cabin'], axis=1)

# Encode categorical variables
data['Sex'] = data['Sex'].map({'male': 0, 'female': 1})
data['Embarked'] = data['Embarked'].map({'C': 0, 'Q': 1, 'S': 2})

# Fill missing values
data['Age'] = data['Age'].fillna(data['Age'].median())
data['Embarked'] = data['Embarked'].fillna(data['Embarked'].mode()[0])

# Display the processed dataset
print(data.head())
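
As an optional sanity check (not part of the original steps), you can confirm that no missing values remain and that every column is numeric before modeling; pd.get_dummies is also a common alternative to the manual map-based encoding used above.

# Optional check: every column should be numeric with zero missing values
print(data.isnull().sum())
print(data.dtypes)

# Alternative encoding (instead of the manual map above), shown for reference:
# data = pd.get_dummies(data, columns=['Sex', 'Embarked'], drop_first=True)
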
  3. Split the dataset into training and test sets:
X = data.drop('Survived', axis=1)
y = data['Survived']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
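
If the classes are imbalanced, a stratified split keeps roughly the same proportion of survivors in both sets. This is an optional variation rather than part of the original steps.

# Optional: stratified split preserves the class proportions in train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)
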
  4. Implement logistic regression:
log_reg = LogisticRegression(solver='liblinear')
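
The liblinear solver works well for small datasets like this one. If you want to experiment, the regularization strength C and the penalty type are the parameters you will most often adjust; the values below are arbitrary examples, not tuned for this dataset.

# Example only: stronger regularization (smaller C) with an L1 penalty
log_reg_l1 = LogisticRegression(solver='liblinear', penalty='l1', C=0.5, random_state=42)
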
  5. Train and evaluate the model:
log_reg.fit(X_train, y_train)

y_pred = log_reg.predict(X_test)

accuracy = accuracy_score(y_test, y_pred)
confusion = confusion_matrix(y_test, y_pred)
report = classification_report(y_test, y_pred)

print("Accuracy: ", accuracy)
print("Confusion Matrix: \n", confusion)
print("Classification Report: \n", report)
  6. Visualize the results:
plt.figure(figsize=(8, 6))
plt.scatter(X_test['Age'], y_test, color='blue', label='Actual')
plt.scatter(X_test['Age'], y_pred, color='red', label='Predicted', marker='x')
plt.xlabel('Age')
plt.ylabel('Survived')
plt.legend()
plt.show()
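
The scatter plot above is a simple view of the predictions; a heatmap of the confusion matrix is often easier to read. The sketch below uses scikit-learn's ConfusionMatrixDisplay and is an optional extra rather than part of the original steps.

from sklearn.metrics import ConfusionMatrixDisplay

# Optional: render the confusion matrix computed earlier as a heatmap
ConfusionMatrixDisplay(confusion_matrix=confusion, display_labels=[0, 1]).plot()
plt.show()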

In this tutorial, we have gone through the basics of logistic regression, implemented it using Python, and applied it to the Titanic dataset. You can experiment with different datasets and explore various options provided by the LogisticRegression class in scikit-learn for tuning the model.
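
For example, one common way to tune the model is a grid search over a few hyperparameters with cross-validation; the parameter grid below is illustrative, not a recommendation for this dataset.

from sklearn.model_selection import GridSearchCV

# Illustrative grid: search over regularization strength and penalty type
param_grid = {'C': [0.01, 0.1, 1, 10], 'penalty': ['l1', 'l2']}
grid = GridSearchCV(LogisticRegression(solver='liblinear'), param_grid, cv=5, scoring='accuracy')
grid.fit(X_train, y_train)

print("Best parameters: ", grid.best_params_)
print("Best cross-validation accuracy: ", grid.best_score_)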