Usage of enumerate() with a Python list

enumerate() is a useful built-in for creating an iterator to use with a for loop. Here we explain different ways of using
enumerate() with a Python list. Applied to a list, enumerate() acts as an iterator that yields (index, element) tuples.
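
For example (a minimal sketch; the list fruits is made up purely for illustration):

fruits = ['apple', 'banana', 'cherry']

# default: indices start at 0
for index, element in enumerate(fruits):
    print(index, element)

# optional start argument: begin counting at 1 instead
for index, element in enumerate(fruits, start=1):
    print(index, element)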



Running Jupyter Notebook on a remote server

With only a command-line interface, it is often hard to quickly scan through the contents of a remote server. This can be circumvented by running jupyter-lab (or jupyter notebook) on the server and accessing it from a client machine. I presume you have already installed jupyter-lab (or jupyter-notebook) on the server. Jupyter-lab is the better option, as it comes with a file navigator, a spreadsheet viewer (faster than Excel; reminds me of Sublime Text) and an image viewer. Check out this video for the latest feature updates in jupyter-lab.
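
As a rough sketch of one common setup (the port 8888 and user@remote-server are placeholders, not values from the post): start the server without a browser and forward its port over SSH.

# on the remote server: start jupyter-lab without opening a browser
jupyter lab --no-browser --port=8888

# on the local machine: forward the remote port to localhost over SSH
ssh -N -L 8888:localhost:8888 user@remote-server

Then point a local browser at http://localhost:8888 and paste the token printed by the server.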


Handling missing values in a dataset before training

How to impute missing values in a dataset before feeding it to a classifier is often a difficult decision. Imputing with a wrong value can significantly skew the data and result in a wrong classifier. The ideal solution is a clean dataset without any NULL values, but obtaining one might mean throwing out most of the data. There are no perfect workarounds: most classifiers are built from the information in the data, and a lack of it results in a wrong classifier.
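
As one hedged sketch (df is a made-up DataFrame, and median imputation is just one of several strategies, not a recommendation from the post), scikit-learn's SimpleImputer can fill numeric NULLs column by column:

import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# made-up data with missing entries
df = pd.DataFrame({'age': [25, np.nan, 40], 'income': [50000, 60000, np.nan]})

# replace each NaN with the median of its column ('mean' and 'most_frequent' are other strategies)
imputer = SimpleImputer(strategy='median')
df_imputed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
print(df_imputed)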

Extracting top feature names, in order, for a trained classifier in scikit-learn

This post describes how to extract the top feature names from a supervised learning classifier in sklearn.

Note: The training data X_train is a pandas DataFrame with column names, and y_train holds the corresponding labels.

After fitting/training a classifier clf, the per-feature scores can be accessed (the attribute varies depending on the classifier used).

  • For example, for logistic regression it is the magnitude of the coefficients, accessed as clf.coef_
  • For a DecisionTreeClassifier, it is clf.feature_importances_

Sort the scores in descending order using np.argsort() (which sorts ascending, so reverse the result) and use it to index the column names X_train.columns.


# For a Decision Tree classifier

from sklearn.tree import DecisionTreeClassifier
import numpy as np

# X_train (a pandas DataFrame with column names) and y_train are assumed to be defined
clf = DecisionTreeClassifier(random_state=42)
clf.fit(X_train, y_train)

importances = clf.feature_importances_

# printing the top 5 features of the fitted classifier
print(X_train.columns[np.argsort(importances)[::-1]][:5])

# or, equivalently, keeping each feature name paired with its score
print(sorted(zip(X_train.columns, importances), key=lambda x: x[1], reverse=True))
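
Similarly, the logistic regression case from the bullet above can be turned into the same kind of ranking. This is a sketch assuming a fitted binary classifier (with more than two classes clf.coef_ has one row per class), and coefficient magnitudes are only comparable if the features are on similar scales:

# For Logistic Regression
from sklearn.linear_model import LogisticRegression
import numpy as np

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

# clf.coef_ has shape (1, n_features) for a binary problem
coef_magnitudes = np.abs(clf.coef_[0])

# printing the top 5 features by coefficient magnitude
print(X_train.columns[np.argsort(coef_magnitudes)[::-1]][:5])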