Biostat 203B Homework 2
Due Feb 9 @ 11:59PM
Display machine information for reproducibility:
sessionInfo()
Load necessary libraries (you can add more as needed).
library(arrow)
library(data.table)
library(memuse)
library(pryr)
library(R.utils)
library(tidyverse)
Display memory information of your computer
memuse::Sys.meminfo()
In this exercise, we explore various tools for ingesting the MIMIC-IV data introduced in homework 1.
Display the contents of MIMIC hosp and icu data folders:
ls -l ~/mimic/hosp/
ls -l ~/mimic/icu/
Q1. read.csv (base R) vs read_csv (tidyverse) vs fread (data.table)
Q1.1 Speed, memory, and data types
There are quite a few utilities in R for reading plain text data files. Let us test the speed of reading a moderate-sized compressed csv file, admissions.csv.gz, by three functions: read.csv in base R, read_csv in tidyverse, and fread in the data.table package.
Which function is fastest? Is there a difference in the (default) parsed data types? How much memory does each resulting data frame or tibble use? (Hint: system.time measures run times; pryr::object_size measures memory usage.)
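A minimal sketch of this comparison, assuming the file sits at ~/mimic/hosp/admissions.csv.gz:
library(data.table)
library(pryr)
library(readr)
path <- "~/mimic/hosp/admissions.csv.gz"
# Time each reader on the same compressed file
system.time(df_base <- read.csv(path))   # base R
system.time(df_tidy <- read_csv(path))   # tidyverse
system.time(df_dt <- fread(path))        # data.table
# Inspect default parsed types and memory footprints
str(df_base)
object_size(df_base)
object_size(df_tidy)
object_size(df_dt)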
Q1.2 User-supplied data types
Re-ingest admissions.csv.gz by indicating appropriate column data types in read_csv. Does the run time change? How much memory does the resulting tibble use? (Hint: col_types argument in read_csv.)
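A sketch with a partial cols() specification; the columns named below are assumptions to be matched against the actual header (unlisted columns are still guessed):
library(readr)
system.time(
  adm_typed <- read_csv(
    "~/mimic/hosp/admissions.csv.gz",
    col_types = cols(
      subject_id = col_integer(),
      hadm_id = col_integer(),
      admittime = col_datetime()
    )
  )
)
pryr::object_size(adm_typed)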
Q2. Ingest big data files
Let us focus on a bigger file, labevents.csv.gz, which is about 125x bigger than admissions.csv.gz.
ls -l ~/mimic/hosp/labevents.csv.gz
Display the first 10 lines of this file.
zcat < ~/mimic/hosp/labevents.csv.gz | head -10
Q2.1 Ingest labevents.csv.gz by read_csv
Try to ingest labevents.csv.gz using read_csv. What happens? If it takes more than 5 minutes on your computer, then abort the program and report your findings.
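A sketch of the naive attempt; expect this to be very slow or to exhaust memory, so be ready to interrupt it:
library(readr)
system.time(
  lab_full <- read_csv("~/mimic/hosp/labevents.csv.gz")
)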
Q2.2 Ingest selected columns of labevents.csv.gz by read_csv
Try to ingest only columns subject_id, itemid, charttime, and valuenum in labevents.csv.gz using read_csv. Does this solve the ingestion issue? (Hint: col_select argument in read_csv.)
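A sketch using col_select; note that the whole file must still be decompressed and scanned, so this mainly reduces parsing work and memory, not I/O:
library(readr)
system.time(
  lab_sel <- read_csv(
    "~/mimic/hosp/labevents.csv.gz",
    col_select = c(subject_id, itemid, charttime, valuenum)
  )
)
pryr::object_size(lab_sel)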
Q2.3 Ingest subset of labevents.csv.gz
Our first strategy to handle this big data file is to make a subset of the labevents data. Read the MIMIC documentation for the content in data file labevents.csv.
In later exercises, we will only be interested in the following lab items: creatinine (50912), potassium (50971), sodium (50983), chloride (50902), bicarbonate (50882), hematocrit (51221), white blood cell count (51301), and glucose (50931), and the following columns: subject_id, itemid, charttime, and valuenum. Write a Bash command to extract these columns and rows from labevents.csv.gz and save the result to a new file labevents_filtered.csv.gz in the current working directory. (Hint: use zcat < to pipe the output of labevents.csv.gz to awk and then to gzip to compress the output. To save render time, put #| eval: false at the beginning of this code chunk.)
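One possible pipeline (a sketch; the field positions $2, $5, $6, $9 for subject_id, itemid, charttime, and valuenum are assumptions, so verify them against the header line first):
# Keep the header (NR == 1) plus rows whose itemid is one of the 8 labs
zcat < ~/mimic/hosp/labevents.csv.gz |
  awk -F',' -v OFS=',' 'NR == 1 ||
    $5 == 50912 || $5 == 50971 || $5 == 50983 || $5 == 50902 ||
    $5 == 50882 || $5 == 51221 || $5 == 51301 ||
    $5 == 50931 { print $2, $5, $6, $9 }' |
  gzip > labevents_filtered.csv.gz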
Display the first 10 lines of the new file labevents_filtered.csv.gz. How many lines are in this new file? How long does it take read_csv to ingest labevents_filtered.csv.gz?
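A quick check, assuming Q2.3 wrote labevents_filtered.csv.gz to the working directory:
library(readr)
system.time(
  lab_filtered <- read_csv("labevents_filtered.csv.gz")
)
nrow(lab_filtered)
head(lab_filtered, 10)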
Q2.4 Ingest labevents.csv by Apache Arrow
Our second strategy is to use Apache Arrow for larger-than-memory data analytics. Unfortunately, Arrow does not work with gz files directly. First decompress labevents.csv.gz to labevents.csv and put it in the current working directory. To save render time, put #| eval: false at the beginning of this code chunk.
Then use arrow::open_dataset to ingest labevents.csv, select columns, and filter itemid as in Q2.3. How long does the ingest+select+filter process take? Display the number of rows and the first 10 rows of the resulting tibble, and make sure they match those in Q2.3. (Hint: use dplyr verbs for selecting columns and filtering rows.)
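A sketch of the Arrow route; open_dataset builds a lazy scan, and collect() is what materializes the tibble:
# One-time decompression; keep the original .gz file
R.utils::gunzip("~/mimic/hosp/labevents.csv.gz",
                destname = "labevents.csv", remove = FALSE)
library(arrow)
library(dplyr)
target_items <- c(50912, 50971, 50983, 50902, 50882,
                  51221, 51301, 50931)
system.time({
  lab_arrow <- open_dataset("labevents.csv", format = "csv") |>
    select(subject_id, itemid, charttime, valuenum) |>
    filter(itemid %in% target_items) |>
    collect()
})
nrow(lab_arrow)
head(lab_arrow, 10)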
Write a few sentences to explain what Apache Arrow is. Imagine you want to explain it to a layman in an elevator.
Q2.5 Compress labevents.csv to Parquet format and ingest/select/filter
Re-write the csv file labevents.csv in the binary Parquet format. (Hint: arrow::write_dataset.) How large is the Parquet file(s)? How long does the ingest+select+filter process of the Parquet file(s) take? Display the number of rows and the first 10 rows of the resulting tibble and make sure they match those in Q2.3. (Hint: use dplyr verbs for selecting columns and filtering rows.)
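A sketch reusing target_items from the Q2.4 sketch; the output directory name labevents_pq is an assumption:
library(arrow)
library(dplyr)
# One-time conversion from csv to a Parquet dataset
open_dataset("labevents.csv", format = "csv") |>
  write_dataset("labevents_pq", format = "parquet")
system.time({
  lab_pq <- open_dataset("labevents_pq") |>
    select(subject_id, itemid, charttime, valuenum) |>
    filter(itemid %in% target_items) |>
    collect()
})
nrow(lab_pq)
head(lab_pq, 10)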
Write a few sentences to explain what the Parquet format is. Imagine you want to explain it to a layman in an elevator.
Q2.6 DuckDB
Ingest the Parquet file, convert it to a DuckDB table by arrow::to_duckdb, select columns, and filter rows as in Q2.5. How long does the ingest+convert+select+filter process take? Display the number of rows and the first 10 rows of the resulting tibble and make sure they match those in Q2.3. (Hint: use dplyr verbs for selecting columns and filtering rows.)
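A sketch of the DuckDB route (requires the duckdb package; target_items and labevents_pq as in the earlier sketches):
library(arrow)
library(dplyr)
library(duckdb)
system.time({
  lab_duck <- open_dataset("labevents_pq") |>
    to_duckdb() |>
    select(subject_id, itemid, charttime, valuenum) |>
    filter(itemid %in% target_items) |>
    collect()
})
nrow(lab_duck)
head(lab_duck, 10)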
Write a few sentences to explain what DuckDB is. Imagine you want to explain it to a layman in an elevator.
Q3. Ingest and filter chartevents.csv.gz
chartevents.csv.gz contains all the charted data available for a patient. During their ICU stay, the primary repository of a patient’s information is their electronic chart. The itemid variable indicates a single measurement type in the database. The value variable is the value measured for itemid. The first 10 lines of chartevents.csv.gz are
zcat < ~/mimic/icu/chartevents.csv.gz | head -10
d_items.csv.gz is the dictionary for the itemid in chartevents.csv.gz.
zcat < ~/mimic/icu/d_items.csv.gz | head -10
In later exercises, we are interested in the vitals for ICU patients: heart rate (220045), mean non-invasive blood pressure (220181), systolic non-invasive blood pressure (220179), body temperature in Fahrenheit (223761), and respiratory rate (220210). Retrieve a subset of chartevents.csv.gz that only contains these items, using your favorite method from Q2.
Document the steps and show code. Display the number of rows and the first 10 rows of the resulting tibble.
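One possible approach, mirroring the Parquet route from Q2.5 (a sketch; paths and directory names are assumptions):
library(arrow)
library(dplyr)
# One-time: decompress, then convert to a Parquet dataset
R.utils::gunzip("~/mimic/icu/chartevents.csv.gz",
                destname = "chartevents.csv", remove = FALSE)
open_dataset("chartevents.csv", format = "csv") |>
  write_dataset("chartevents_pq", format = "parquet")
vital_items <- c(220045, 220181, 220179, 223761, 220210)
chart_sub <- open_dataset("chartevents_pq") |>
  filter(itemid %in% vital_items) |>
  collect()
nrow(chart_sub)
head(chart_sub, 10)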