
Introduction: What is Bash and why do we need it?

Bash is a useful asset for any data scientist, providing an easy way to navigate the file system, create and edit files, and run programs. As a scripting language, Bash is also well suited to automating repetitive tasks, saving time and reducing the risk of errors: a Bash script is simply a file of commands that are executed in sequence.

Before diving in, here are a few reasons why Bash is worth knowing, particularly for data scientists:

  • Efficient Data Processing: Bash lets you process large datasets quickly and efficiently. Scripts can automate repetitive tasks such as downloading, cleaning, and analyzing data, so data scientists can focus on interpreting the results.

  • A better collaboration experience: Bash scripts can be shared with other members of the team, making it easier to collaborate and reproduce results. Scripts can also be version controlled with Git, so team members can track changes and work on them together.

  • More Control and Flexibility: Bash scripts provide a high degree of control and flexibility over data processing tasks. By using Bash, data scientists can quickly and easily modify their scripts to adapt to new data or processing requirements.

  • Access to a Rich Set of Tools: Bash scripts can be used to access a wide variety of tools, including Python, R, and SQL. By combining Bash with other tools, data scientists can create more complex and sophisticated data processing pipelines.

Altogether, having an understanding of Bash is invaluable. In this blog post, I will cover the basics of Bash for data scientists, including essential commands, scripting, and working with Conda environments. We’ll also explore more advanced Bash topics that can be useful for data scientists, such as piping, redirection, and process substitution. By the end of this post, you’ll have a solid understanding of how Bash can be used to streamline your data science workflow, allowing you to focus on what really matters: analyzing and interpreting your data.

Basic Bash Commands

Before moving on to Bash scripting and more advanced topics, it is worth examining some fundamental Bash commands that data scientists rely on for routine tasks. These commands let you interact with the command line to explore the file system, work with files, and run programs, and they can substantially increase productivity when handling large datasets or complex analyses.

I’ll go through several basic Bash commands that every data scientist should be familiar with, demonstrating how to navigate the file system and create or edit files. By the end of this post, you’ll have a clear understanding of the essentials of Bash and how to apply these commands to streamline your data science workflow.

Let us begin by navigating the file system, which is the most fundamental task in Bash. This task comes with a set of commands. The most important ones are:

  • cd changes directories (e.g. cd folder changes the current directory to the folder folder)
  • pwd prints the path of the current working directory
  • mkdir creates a directory (e.g. mkdir folder creates a directory named folder)
  • ls lists files and directories in the current directory. There are many options that can be used to customize the output of ls, including the ability to sort the output, display file sizes and permissions, and show hidden files. (e.g. ls -l displays detailed information about files and directories in the current directory)
  • rm removes a directory or file (e.g. rm -r folder removes the directory folder and all of its contents recursively)
  • touch creates a file (e.g. touch file.txt creates a file named file.txt)
  • mv moves a file or directory (e.g. mv file.txt folder/ moves the file file.txt to the folder folder)
  • cp copies a file or directory (e.g. cp file.txt folder/ copies the file file.txt to the folder folder)
  • cat prints the contents of a file (e.g. cat file.txt prints the contents of the file file.txt)
  • nano opens a file in the nano editor (e.g. nano file.txt opens the file file.txt in the nano editor)
  • vim opens a file in the vim editor (e.g. vim file.txt opens the file file.txt in the vim editor)
  • man displays the manual page for a command (e.g. man ls displays the manual page for the ls command)
  • history displays a list of previously executed commands
  • clear clears the terminal screen
  • exit exits the terminal

These commands allow you to browse your file system and create, find, and modify the files and folders you need.
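
As a quick illustration, a typical session using several of these commands might look like the following (the directory and file names are placeholders):

# Create a project directory and move into it
mkdir my_project
cd my_project

# Check where we are and create a few files
pwd
touch notes.txt data.csv

# List the contents with details, then inspect a file
ls -l
cat notes.txt

# Copy the data file into a backup folder and clean up
mkdir backup
cp data.csv backup/
rm notes.txt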

When working with data, it is frequently necessary to read and update files quickly and efficiently. With Bash commands such as cat, nano, and vim, data scientists can inspect and modify files directly from the terminal without opening a separate GUI editor, saving time and improving productivity.

To automate workflows and simplify complex tasks, Bash commands can be combined with other tools such as the version control system Git. For example, Bash commands can be used in Git hooks to automate code testing, file formatting, and report generation.
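
For example, a minimal sketch of a pre-commit hook, saved as an executable file at .git/hooks/pre-commit, might look like this (the pytest test suite and the black formatter are just assumptions; substitute whatever checks your project actually uses):

#!/bin/bash
# .git/hooks/pre-commit -- runs before every commit

# Run the test suite; abort the commit if it fails
if ! python -m pytest tests/; then
  echo "Tests failed, aborting commit."
  exit 1
fi

# Check the formatting of the Python code; abort the commit if it fails
if ! black --check .; then
  echo "Formatting check failed, aborting commit."
  exit 1
fi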

Overall, by knowing these basic Bash commands, data scientists can explore the file system, manage files and directories, and quickly examine and edit files from the command line, streamlining their workflows and enhancing productivity. Incorporating Bash into everyday work leaves more time for the crucial tasks of analyzing data and extracting insights.

Bash scripting

Let’s get started with Bash scripting. A Bash script is a file containing a group of commands that can be run as a single unit. Bash scripts can automate routine tasks such as data processing and file manipulation, which saves data scientists a great deal of time and increases productivity.

Bash scripts can include conditional statements, loops, and variables to automate complex processes. For example, a script might automate the cleaning and formatting of a large dataset that would otherwise require significant manual effort. This lets data scientists build customized workflows for their specific needs, automating repetitive operations and freeing time for more important tasks such as data analysis or algorithm development. Automating operations with scripts also limits the risk of human error and ensures that tasks are executed consistently and accurately every time.
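
To make this concrete, here is a minimal sketch of a script that combines a variable, a loop, and a conditional to report the number of rows in every CSV file in a directory (the directory name is a placeholder):

#!/bin/bash

# Directory containing the raw data (placeholder path)
data_dir="raw_data"

# Loop over every CSV file and report how many rows it has
for file in "$data_dir"/*.csv; do
  if [ -f "$file" ]; then
    rows=$(wc -l < "$file")
    echo "$file has $rows rows"
  else
    echo "No CSV files found in $data_dir"
  fi
done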

Let’s look at an example. Suppose you are a data scientist with a dataset that has to be cleaned and formatted. You can automate this process with a Bash script that deletes unwanted characters or columns, replaces missing values, and formats the data in a uniform manner, using standard command-line tools such as:

  • sed: a stream editor for transforming text, e.g. sed -i '1d' data.csv removes the first line (header) of the CSV file
  • awk: awk -F, '{print $1}' data.csv prints the first column of the CSV file
  • grep: grep -v 'NA' data.csv prints every line of the CSV file that does not contain the string 'NA'
  • cut: cut -d, -f1 data.csv prints the first column of the CSV file
  • echo: echo "Hello World" prints the string ‘Hello World’
  • tr: tr -d '\r' < data.csv > clean.csv removes all carriage returns from the CSV file (the output must go to a different file; redirecting back to the input file would truncate it)
  • sort: sort -u data.csv sorts the CSV file and removes duplicates
  • uniq: uniq data.csv removes adjacent duplicate lines from the CSV file (the input is usually sorted first so that all duplicates are adjacent)
  • paste: paste -d, data1.csv data2.csv pastes the two CSV files together, separating them with a comma
  • join: join -t, data1.csv data2.csv joins the two CSV files on their first field, using a comma as the separator (both files must be sorted on that field)
  • wc: wc -l data.csv counts the number of lines in the CSV file
  • head: head -n 10 data.csv prints the first 10 lines of the CSV file
  • tail: tail -n 10 data.csv prints the last 10 lines of the CSV file
  • find: find . -name "*.csv" finds all CSV files in the current directory and its subdirectories
  • xargs: find . -name "*.csv" | xargs wc -l counts the number of lines in all CSV files in the current directory and its subdirectories
  • zcat: zcat data.csv.gz | wc -l counts the number of lines in the gzipped CSV file
  • gzip: gzip data.csv gzips the CSV file
  • gunzip: gunzip data.csv.gz unzips the gzipped CSV file
  • bzip2: bzip2 data.csv bzips the CSV file
  • bunzip2: bunzip2 data.csv.bz2 unzips the bzipped CSV file
  • tar: tar -cvf data.tar data.csv creates a tar archive of the CSV file or tar -xvf data.tar extracts the tar archive
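
Put together, a minimal sketch of such a cleaning script might look like this (it assumes a comma-separated file named data.csv in which missing values appear as the string NA):

#!/bin/bash

# Drop the header row, remove rows containing NA, and keep the first two columns
sed '1d' data.csv | grep -v 'NA' | cut -d, -f1,2 > cleaned.csv

# Report how many rows survived the cleaning
wc -l cleaned.csv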

The dataset can then be cleaned and formatted by running a script like this, and the workflow can be readily adjusted and customized as needed, making it a flexible and powerful tool for data scientists. This saves a significant amount of time and effort, letting you concentrate on the more important work of analyzing data and developing insights.

Overall, Bash scripting is a must-have tool for data scientists looking to automate tedious operations and streamline workflows. By learning Bash scripting, data scientists may save time, eliminate errors, and focus on the vital duties of evaluating data and providing insights.

Data Cleaning Script:

A Bash script can also automate the cleaning and formatting of a large dataset from start to finish. Commands such as sed, awk, and grep are used to delete unwanted characters or columns, replace missing values, and format the data in a uniform manner.

Here is an example of such a Bash script; it assumes data.csv has a header row and three comma-separated columns: an id, a gender, and an age:

#!/bin/bash

# Assumes data.csv has a header row and three comma-separated columns:
# id, gender, age

# Replace empty fields (missing values) with "NaN"
sed -i 's/,,/,NaN,/g' data.csv

# Remove any rows with missing values, keeping the header row
awk -F ',' 'NR == 1 || !/NaN/' data.csv > temp.csv
mv temp.csv data.csv

# Remove any non-numeric characters from the "age" column (column 3)
awk -F ',' 'BEGIN { OFS = "," } NR > 1 { gsub(/[^0-9]/, "", $3) } { print }' data.csv > temp.csv
mv temp.csv data.csv

# Convert the "gender" column (column 2) to lowercase
awk -F ',' 'BEGIN { OFS = "," } NR > 1 { $2 = tolower($2) } { print }' data.csv > temp.csv
mv temp.csv data.csv

# Rename the columns in the header row
sed -i '1s/gender/sex/' data.csv
sed -i '1s/age/age_years/' data.csv

Data Visualization Script:

A Bash script could also automate the process of creating data visualizations, generating charts, graphs, and other figures from input data by combining Bash commands with data visualization libraries such as Matplotlib or Plotly.

Here’s an example of a Bash script that uses matplotlib to generate a scatter plot from a CSV file containing two columns of numerical data:

#!/bin/bash

# Define the input and output file names
input_file="data.csv"
output_file="scatter_plot.png"

# Define the plot command (the Agg backend lets it run without a display)
plot_command="import matplotlib; \
              matplotlib.use('Agg'); \
              import matplotlib.pyplot as plt; \
              import pandas as pd; \
              df = pd.read_csv('$input_file'); \
              plt.scatter(df.iloc[:, 0], df.iloc[:, 1]); \
              plt.xlabel('X Label'); \
              plt.ylabel('Y Label'); \
              plt.title('Scatter Plot'); \
              plt.savefig('$output_file')"

# Run the plot command with Python
python -c "$plot_command"

The script reads data.csv with pandas and uses the scatter function from matplotlib.pyplot to create a scatter plot of the two columns. The xlabel, ylabel, and title calls add labels and a title, and savefig writes the figure to scatter_plot.png (the Agg backend is selected so the script also works on machines without a display). We run plot_command with Python's -c flag, which tells Python to execute the command passed as a string. By automating plot generation with a Bash script, data scientists can quickly produce visualizations of their data and include them in reports or presentations, and the script can be modified to plot different kinds of data or to customize the plot settings as needed.

Model Training Script:

A model training script could automate the training of machine learning models on large datasets. The script can use Bash commands to preprocess the data and then train and evaluate the model with libraries such as scikit-learn or TensorFlow, as in the following example:

#!/bin/bash

# Define the input data file name
input_file="data.csv"

# Define the model output directory
output_dir="model"

# Define the script to load the data and train the model
train_script="import pandas as pd; \
              import tensorflow as tf; \
              data = pd.read_csv('$input_file'); \
              X = data.iloc[:, :-1]; \
              y = data.iloc[:, -1]; \
              model = tf.keras.Sequential([ \
                  tf.keras.layers.Dense(64, activation='relu'), \
                  tf.keras.layers.Dense(1, activation='sigmoid') \
              ]); \
              model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']); \
              model.fit(X, y, epochs=10); \
              model.save('$output_dir')"

# Run the script to train the model
python -c "$train_script"

In the preceding script, we read data.csv with pandas and use tf.keras.Sequential to define a neural network with one hidden layer. The model is compiled with the Adam optimizer and binary cross-entropy loss (so the last column is assumed to contain a binary label), trained for 10 epochs on the input data, and saved to the given output directory. Finally, we run train_script with Python's -c flag. By automating the model training process, data scientists can quickly experiment with alternative models and hyperparameters, and the script can be adjusted to train different types of models depending on the data.

Data Download Script:

A data download script could automate the process of downloading large datasets from the internet. Such a script can use command-line tools like curl or wget to download data from a specific website or API and save it to a specified directory.

Here’s an example of a Bash script for downloading a dataset from a remote URL:

#!/bin/bash

# Define the URL for the data
url="https://example.com/data.zip"

# Define the output directory
output_dir="data"

# Create the output directory if it doesn't exist
if [ ! -d "$output_dir" ]; then
  mkdir "$output_dir"
fi

# Download the data file and extract it to the output directory
wget "$url" -O "$output_dir/data.zip"
unzip "$output_dir/data.zip" -d "$output_dir"

# Delete the downloaded data file
rm "$output_dir/data.zip"

Here the data URL is stored in the url variable and the output directory in output_dir. The script creates the output directory with mkdir if it does not already exist, downloads the data file into it with wget, extracts the contents of the zip archive with unzip, and finally deletes the downloaded archive with rm. The script can be adapted to download and extract different types of data from different URLs as needed. By automating downloading and extraction with a Bash script, data scientists can quickly and easily obtain the data they need for their projects.
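
If wget is not available, the download step could use curl instead; here is a minimal variant that reuses the url and output_dir variables defined above:

# Download the data file with curl (-L follows redirects, -o names the output file)
curl -L "$url" -o "$output_dir/data.zip"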

Pipeline Script:

Pipeline scripts are another type of script worth looking at. A pipeline script can automate the entire data science workflow, from data cleaning through model training to model deployment, by using Bash to connect different scripts and tools into a unified pipeline. This makes it easy to adjust and customize the workflow as required.

#!/bin/bash

# Stop the pipeline immediately if any step fails
set -e

# Define input and output directories
input_dir="input"
output_dir="output"

# Create the output directory if it doesn't exist
if [ ! -d "$output_dir" ]; then
  mkdir "$output_dir"
fi

# Step 1: Data preprocessing
echo "Preprocessing data..."
python preprocess.py "$input_dir" "$output_dir/preprocessed_data.csv"

# Step 2: Feature engineering
echo "Performing feature engineering..."
python feature_engineering.py "$output_dir/preprocessed_data.csv" "$output_dir/feature_engineered_data.csv"

# Step 3: Model training
echo "Training model..."
python train_model.py "$output_dir/feature_engineered_data.csv" "$output_dir/model.pkl"

echo "Pipeline completed successfully!"

In the previous script example, we define the input and output directories as input_dir and output_dir. The script then checks if the output directory exists and creates it if it doesn’t. The created pipeline consists of three steps:

  • data preprocessing,
  • feature engineering,
  • model training.

Each step is executed by a different Python script, with the output of one step serving as the input to the next, and all intermediate results saved in the output directory. The echo command provides status updates for each step, and a final message is printed to the console when the pipeline completes successfully. By automating the data processing and modeling steps in a pipeline, data scientists can save time and reduce the risk of errors.

Working with Conda Environments in Bash

Conda is a well-known package management system that lets you create and maintain isolated environments with different dependencies. This is especially useful for data scientists working on multiple projects that may require different versions of Python or different packages: it prevents package conflicts and guarantees that everything a project needs is installed. Conda also makes it simple to share environments, which is particularly helpful for collaborative projects or for reproducing experiments. By exporting an environment file, other users can quickly recreate the same environment on their own system, ensuring that everyone is using the same set of packages and dependencies.

Miniconda is a lightweight version of Conda, which makes it faster and easier to install and manage than the full distribution. We use it from Bash for the following purposes:

  • Install Miniconda: You must first download and install Miniconda on your computer before using it. From the Miniconda website, you can download the proper installer and launch the installation command in Bash.

  • Create a new environment: Just like with the Conda full version, you can create a new environment using the conda create command. You can specify the name of the environment and the Python version that you want to use. You can also specify the packages that you want to install in the environment.

  • Activate the environment: Once you have created an environment, you can activate it using the conda activate command, just like with the full version of Conda. This activates the environment and sets the appropriate environment variables.

  • Install packages: The conda install command can be used to install packages in the active environment, just like with Conda's full version. Miniconda will automatically resolve and install the packages' dependencies.

  • List environments: Just like with the Conda full version, you can use the conda env list command to list every environment you’ve created. This will display information about the environments, including their names, paths, and Python versions.

  • Export and import environments: the conda env export command generates a YAML file listing all the packages installed in the environment, which can be used to duplicate the environment on another computer. As with the full version of Conda, you can recreate an environment from a YAML file using the conda env create command.

  • Deactivate the environment: Just like with the full version of Conda, you can deactivate an environment once you have finished working in it by using the conda deactivate command. This switches back to the previous environment and restores the corresponding environment variables.

Here is an example of how a Conda environment might be created and used from a Bash script for a Python project.

#!/bin/bash

# Set the environment name
myenv_name=myenv

# Make "conda activate" available in this non-interactive shell
source "$(conda info --base)/etc/profile.d/conda.sh"

# Create a new environment with Python 3.8 (-y skips the confirmation prompt)
conda create -y -n "$myenv_name" python=3.8

# Activate the environment
conda activate "$myenv_name"

# Install packages into the active environment
conda install -y pandas matplotlib scikit-learn

# Deactivate the environment
conda deactivate

This script first makes the conda command usable in a non-interactive shell by sourcing Conda's shell hook, then creates the new environment myenv with Python 3.8 and activates it with conda activate. We then install a few packages with conda install (the -y flag skips the confirmation prompts) before leaving the environment with conda deactivate. This is a basic illustration of how Miniconda can be used to create and manage Python environments and install packages, and it can be adapted to manage environments for particular data science projects.
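
To share such an environment with collaborators, the export and import commands mentioned earlier can also be run from Bash; a minimal sketch (environment.yml is just a conventional file name):

# Export the active environment to a YAML file
conda env export > environment.yml

# On another machine, recreate the same environment from that file
conda env create -f environment.yml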

Advanced Bash Topics for Data Scientists

Being proficient in Bash will help you work more efficiently and productively as a data scientist. Data scientists may find it helpful to learn more advanced topics in addition to the fundamental Bash commands. Here we’ll talk about piping, redirection, and process substitution.

Piping

Instead of saving the output to a file and then reading it in, piping lets you pass the results of one command as input to another. When working with large datasets, this can save both time and space. Consider the scenario where you want to filter a sizable CSV file based on a specific column. You could extract the column with the cut command and then filter the output with the grep command using a specific value. Instead, you can pipe the output of cut directly to grep rather than saving it to a file and then reading it in with grep:

cut -d ',' -f 1-3 data.csv | grep 'John'

This command extracts the first three columns of data.csv and pipes the output to grep, which keeps only the rows containing the string "John" somewhere in those columns.
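
Pipes can be chained as many times as needed. A common pattern in data work is counting how often each value occurs in a column; assuming a comma-separated data.csv with the category of interest in the second column:

cut -d ',' -f 2 data.csv | sort | uniq -c | sort -rn | head -n 5

This extracts the second column, counts the occurrences of each distinct value, and prints the five most frequent ones.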

Redirection

Redirection lets you send a command's input or output to a file or device, which is helpful for reading data from files or saving command output. Suppose you have a script that prints output to the terminal and you also want to save that output to a file. The > operator can be used to direct the output to a file:

./myscript.sh > output.txt

This command runs myscript.sh and redirects the output to a file called output.txt.

You can also use the >> operator to append output to a file, rather than overwriting it:

./myscript.sh >> output.txt

This command runs myscript.sh and appends the output to the end of output.txt.
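
Redirection is not limited to standard output. Error messages travel on a separate stream (file descriptor 2), so they can be captured on their own, and the < operator feeds a file to a command's standard input:

# Save normal output and error messages to separate files
./myscript.sh > output.txt 2> errors.txt

# Read input from a file and write the sorted result to another file
sort < data.csv > sorted.csv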

Process Substitution

Process substitution lets you use a command's output as if it were a file, which is helpful for commands that expect a file name rather than data on standard input. Suppose you have a script that reads data from a file, and you want to feed it the output of another command without saving that output to a temporary file first. Process substitution passes the results of the other command to your script:

./myscript.sh <(ls -1 *.txt)

This command uses ls to list all files with the .txt extension; the resulting list is then passed to myscript.sh as though it were a file.
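
Another common use of process substitution is comparing the output of two commands directly, without creating temporary files. For example, to check whether two CSV files contain the same rows regardless of their order:

diff <(sort data1.csv) <(sort data2.csv)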

Best Practices for Bash Scripting

Bash scripting is a powerful tool for data scientists to automate processes, manage environments, and improve workflows. However, it's crucial to follow a few best practices to ensure that your scripts are reliable and easy to maintain. Consider the following advice when writing Bash scripts:

  • Use comments: describe what each section of your code does and why. This makes the script easier to understand for you, for other people, and when revising it later.
  • Use variables: to increase the flexibility and reuse of your scripts. For instance, storing the path to a file in a variable means only a single update is needed if the path changes.
  • Use functions: to divide your code into reusable, modular components. This makes your code easier to read and maintain.
  • Check for errors: at all times, and handle them gracefully. In Bash this can be done with conditional statements, exit-status checks, options such as set -e, or trap handlers. Before using your scripts in production, test them thoroughly, covering different scenarios, edge cases, and unexpected inputs.
  • Use version control: to keep track of script changes and collaborate with others. It also lets you roll back to an earlier version if something goes wrong.
  • Keep it simple: avoid deeply nested loops or conditional statements, which make the code harder to understand and maintain.

You can create dependable and maintainable Bash scripts by adhering to these best practices, which will increase your productivity and efficiency as a data scientist.
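
To illustrate, here is a small sketch of a script that applies several of these practices at once: comments, variables, a function, and basic error handling (the script name and the CSV files passed to it are placeholders):

#!/bin/bash
# summarize.sh -- print a data-row count for each CSV file passed as an argument

# Exit on errors, undefined variables, and failures inside pipelines
set -euo pipefail

# Reusable function: count the data rows (excluding the header) in one file
count_rows() {
  local file="$1"
  if [ ! -f "$file" ]; then
    echo "Error: $file not found" >&2
    return 1
  fi
  echo "$file: $(($(wc -l < "$file") - 1)) data rows"
}

# Loop over all arguments and summarize each file
for csv_file in "$@"; do
  count_rows "$csv_file"
done

Running bash summarize.sh data1.csv data2.csv prints one line per file, and the script stops with an error message if any file is missing.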

Conclusion

In conclusion, Bash is a crucial tool for data scientists because it offers strong functionality for navigating file systems, automating tasks, and managing software environments. Data scientists can speed up their processes and save time by becoming proficient in the fundamental Bash commands as well as the creation and use of Bash scripts. Additionally, as a data scientist, being knowledgeable in advanced Bash concepts like piping, redirection, and process substitution can greatly improve your productivity and streamline your workflow. Nevertheless, it’s crucial to adhere to best practices for Bash scripting, which include using version control, crafting understandable, readable code, and thoroughly testing and debugging scripts. Data scientists can improve their productivity and effectiveness in their work by adding Bash to their toolkit.