---
title: "A Coding Case Study with Quarto"
subtitle: "LASER Orientation Module"
author: "LASER Institute"
date: today
format:
  html:
    toc: true
    toc-depth: 4
    toc-location: right
    theme:
      light: simplex
      dark: cyborg
editor: visual
bibliography: lit/references.bib
jupyter: python3
---
## 0. INTRODUCTION
![](img/LASER_Hx.png){width="40%"}
Welcome to your first LASER Case Study! The case study activities included in each module demonstrate how key Learning Analytics (LA) techniques featured in exemplary STEM education research studies can be implemented with R or Python. Case studies also provide a holistic setting to explore important foundational topics integral to Learning Analytics such as reproducible research, use of APIs, and ethical use of educational data.
This orientation case study will also introduce you to [Quarto](https://quarto.org), which is heavily integrated into each LASER Module. You may have used Quarto before - or you may not have! Either is fine, as this task is designed with the assumption that you have not used Quarto before.
### How to use this Quarto document
What you are working in now is a **Q**uarto **m**ark**d**own file, as indicated by the .**qmd** file name extension. Quarto documents are fully reproducible and use a productive notebook interface to combine formatted text and "chunks" of code to produce a range of [static and dynamic output formats](https://quarto.org/docs/guide/) including: HTML, PDF, Word, HTML5 slides, Tufte-style handouts, books, dashboards, Shiny applications, scientific articles, websites, and more.
::: callout-tip
Quarto docs can include specially formatted **callout boxes** like this one to draw special attention to notes, tips, cautions, warnings, and important information.
**Pro tip**: Quarto documents also have a handy Outline feature that allows you to easily navigate the entire document. If the outline is not currently visible, click the Outline button located on the right of the toolbar at the top of this document.
:::
#### Source vs. Visual Editor
Following best practices for reproducible research [@gandrud2021], Quarto files store information in plain text [markdown](https://bookdown.org/yihui/rmarkdown/markdown-syntax.html) syntax. You are currently viewing this Quarto document using the visual editor. The visual editor is set as the default view in the [Quarto YAML header](https://quarto.org/docs/get-started/hello/rstudio.html#yaml-header) at the top of this document. Basically, a [YAML header](https://monashdatafluency.github.io/r-rep-res/yaml-header.html#) is:
> a short blob of text that... not only dictates the final file format, but a style and feel for our final document.
The visual editor allows you to view formatted headers, text and code chunks and is a bit more "human readable" than markdown syntax but there will be many occasions where you will want to take a look at the plain text source code underlying this document. This can be viewed at any point by switching to source mode for editing. You can toggle back and forth between these two modes by clicking on **Source** and **Visual** in the editor toolbar.
::: callout-note
You may have noticed a special kind of link in the text above. Specifically, a link citing *Reproducible Research with R and R Studio* by Chris Gandrud. The YAML header includes a bibliography option and points to our `references.bib` file in the `lit` folder of this project, which produces a nice tooltip for linked references and a bibliography when our doc is [rendered](https://quarto.org/docs/get-started/hello/rstudio.html#rendering) and [published](https://quarto.org/docs/get-started/authoring/rstudio.html#publishing). Click the following link to learn more about [citations in Quarto](https://quarto.org/docs/authoring/citations.html).
:::
#### 👉 Your Turn ⤵
LASER case studies include many interactive elements in which you are asked to perform an action, answer some questions, or write some code. These are indicated by the **👉 Your Turn** **⤵** header. Now it's your turn to do something.
Take a look at the markdown syntax used to create this document by viewing with the source editor. To do so, click the "Source" button in the toolbar at the top of this file. After you've had a look, click back to the visual editor to continue.
![](img/source-view.png){width="100%"}
Great job! Let's continue!
#### Code "Chunks"
In addition to including formatted text, hyperlinks, and embedded images like above, Quarto documents can also include a specially formatted text box called a "[code chunk](https://quarto.org/docs/get-started/hello/rstudio.html#code-chunks)." These chunks allow you to run code from multiple languages including R, Python, and SQL. For example, the code chunk below is intended to run Python code, as specified by "python" inside the curly brackets `{}`. It also contains some code "comments," as indicated by the \# hashtags, and several lines of Python code. You may have also noticed a set of buttons in the upper right corner of the code chunk which are used to execute the code.
#### 👉 Your Turn ⤵
Click the green arrow ![](https://d33wubrfki0l68.cloudfront.net/18153fb9953057ee5cff086122bd26f9cee8fe93/3aba9/images/notebook-run-chunk.png)icon on the right side of the code chunk to run the Python code and view the image file named `laser-cycle.png` stored in the `img` folder in your files pane. Quarto will execute the code, and its output and any related messages will be displayed below the chunk.
```{python}
# Import the pyplot module from the matplotlib library
import matplotlib.pyplot as plt
# Read and display an image from file
plt.imshow(plt.imread('img/laser-cycle.png'))
plt.axis('off') # Hide axes
plt.show()
```
Nice work! For this case study, **don't stress too much about understanding the code**. We'll spend a lot of time doing that in the other modules. For now, take a look at the image displayed and answer the question that follows by typing your response directly in this document.
#### ❓Question
In LASER case studies, you will often see, as part of "Your Turns," a ❓ icon that indicates you are being prompted to answer a question. Type your response to the following question by deleting "YOUR RESPONSE HERE" and adding your own response:
What do you think this image is intended to illustrate?
- YOUR RESPONSE HERE
### The Data-Intensive Research Workflow
The diagram shown above illustrates a Learning Analytics framework called the Data-Intensive Research workflow and comes from the excellent book, *Learning Analytics Goes to School* [@krumm2018]. You can check that out later, but don't feel any need to dive deep into it for now - we spend more time unpacking this framework in our [Learning Analytics Workflow Modules](https://laser-institute.github.io/laser-website/curriculum-la-workflow.html); just know that this case study and all of the case studies in our [LASER curriculum modules](https://laser-institute.github.io/laser-website/curriculum-design.html#modules-topics) are organized around the five main components of this workflow.
In this introductory coding case study, we'll focus on the following tasks specific to each component of the workflow:
1. **Prepare**. Understand the research context, software packages, and data collected.
2. **Wrangle**. Select and filter variables and "wrangle" them in a tabular (think spreadsheet!) format.
3. **Explore**. Create some basic summary tables and plots to understand our data better.
4. **Model**. Run a basic model - specifically, a simple regression model.
5. **Communicate**. Create a reproducible report of your work that you can share with others.
Now, let's get started!
## 1. PREPARE
First and foremost, data-intensive research involves defining and refining a research question and developing an understanding of where your data comes from [@krumm2018]. This part of the process also involves setting up a reproducible research environment so your work can be understood and replicated by other researchers [@gandrud2021]. For now, we'll focus on just a few parts of this process, diving in much more deeply into these components in later learning modules.
### Research Question
In this case study, we'll be working with data that come from an unpublished research study by LASER team member [Josh Rosenberg](https://joshuamrosenberg.com), which utilized a number of different data sources to understand high school students' motivation within the context of online courses.
These data sets and related research questions are explored in much greater detail in other modules, but for the purpose of this case study, our analysis will be driven by the following research question:
*Is there a relationship between the time students spend on a course (as measured through their learning management system) and their final course grade?*
### Projects & Packages 📦
As highlighted in [Chapter 6 of Data Science in Education Using R](https://datascienceineducation.com/c06.html) [@estrellado2020e], one of the first steps of every research workflow should be to set up a "Project" within RStudio.
> A **Project** is the home for all of the files, images, reports, and code that are used in any given project.
We are working in Posit Cloud with an R project [cloned from GitHub](https://github.com/laser-institute/laser-orientation), so a project has already been set up for you as indicated by the `.Rproj` file in the main directory.
#### 👉 Your Turn ⤵
Locate the Files tab in the lower right-hand window pane and see if you can find the file named `laser-orientation.Rproj`.
Since a project has already been set up for us, we will instead focus on loading the required **packages** we'll need for analysis.
> Packages, sometimes referred to as libraries, are shareable collections of Python code that can contain functions, data, and/or documentation and extend the functionality of Python.
#### pandas 📦
![](img/pandas.svg){width="30%"}
One package that we'll be using extensively is {pandas}. [Pandas](https://pandas.pydata.org) [@mckinney-proc-scipy-2010] is a powerful and flexible open source data analysis and wrangling tool for Python that is used widely by the data science community.
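If pandas is new to you, here is a minimal sketch of the kind of tabular object it provides. The `toy` DataFrame and its values are made up purely for illustration; they are not part of the case study data:
```{python}
# A small, made-up DataFrame: rows are observations, columns are variables
import pandas as pd

toy = pd.DataFrame({
    'student': ['A', 'B', 'C'],
    'minutes_online': [120, 95, 240]
})
print(toy)
```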
#### NumPy 📦
![](img/numpy.png){width="20%"}
[NumPy](https://numpy.org) is a fundamental package for scientific computing with Python and includes a collection of mathematical algorithms and convenience functions. NumPy offers comprehensive mathematical functions, random number generators, linear algebra routines, Fourier transforms, and more.
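As a quick, optional taste of what that looks like in practice (the `minutes` array below is made up for illustration):
```{python}
# A minimal, illustrative taste of NumPy
import numpy as np

minutes = np.array([120, 95, 240])
print(minutes.mean())    # a convenience function for the average
print(np.sqrt(minutes))  # vectorized math applied across the whole array
print(np.random.default_rng(42).integers(0, 100, size=3))  # random numbers
```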
#### Pyplot 📦
![](img/matplotlib.png){width="40%"}
Pyplot is a module in the {matplotlib} package, a comprehensive library for creating static, animated, and interactive visualizations in Python. **`pyplot`** provides a MATLAB-like interface for making plots and is particularly suited for interactive plotting and simple cases of programmatic plot generation.
#### Statsmodels 📦
![](img/statsmodels.svg){width="40%"}
The [statsmodels](https://www.statsmodels.org/stable/about.html#about-statsmodels) package provides classes and functions for estimating many different statistical models, as well as for conducting statistical tests and exploring statistical data. An extensive list of result statistics is available for each estimator, and the results are tested against existing statistical packages to ensure that they are correct.
#### scikit-learn 📦
The [scikit-learn](https://scikit-learn.org/) package is an open source machine learning library that supports supervised and unsupervised learning. It also provides various tools for model fitting, data preprocessing, model selection, model evaluation, and many other utilities.
#### 👉 Your Turn ⤵
Click the arrow to execute the code in the cell below to load the required packages and functions for this case study.
```{python}
import pandas as pd # for data wrangling
import numpy as np # for descriptive statistics
import matplotlib.pyplot as plt # for data visualization
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression # for data modeling
```
### Loading (or reading in) data
The data we'll explore in this case study were originally collected for a research study, which utilized a number of different data sources to understand students' course-related motivation. These courses were designed and taught by instructors through a state-wide online course provider designed to supplement -- but not replace -- students' enrollment in their local school.
The data used in this case study has already been "wrangled" quite a bit, but the original datasets included:
1. A self-report survey assessing three aspects of students' motivation
2. Log-trace data, such as data output from the learning management system (LMS)
3. Discussion board data
4. Academic achievement data
To learn more, see Chapter 7 of [*Data Science in Education Using R*](https://datascienceineducation.com/c07.html#data-sources) [@estrellado2020e].
#### 👉 Your Turn ⤵
Next, we'll load our data - specifically, a CSV (comma separated value) text file, the kind that you can export from Microsoft Excel or Google Sheets - into pandas, using the `pd.read_csv()` function in the next chunk.
```{python}
# Read sci-online-classes.csv into a DataFrame named sci_data
sci_data = pd.read_csv("data/sci-online-classes.csv")
```
Nice work! You should now see a new data "object" named `sci_data` saved in your Environment pane. Try clicking on it and see what happens!
::: callout-important
It's important to note that by manipulating data with pandas we do **not** change the original file. Instead, the data is stored in memory, can be viewed in our **Environment** pane, and can later be exported and saved as a new file if desired.
:::
#### Viewing and inspecting data
Now let's learn another way to inspect our data.
#### **👉 Your Turn ⤵**
Run the next chunk and look at the results of the data frame you "assigned" to the `sci_data` object in the previous code-chunk:
```{python}
sci_data
```
::: callout-tip
You can also enlarge this output by clicking the "Show in New Window" button located in the top right corner of the output.
:::
#### ❓Question
What do you notice about this data set? What do you wonder? Add one or two observations in the space below:
- YOUR RESPONSE HERE
#### Data Types
Now, let's **examine** our data a little more systematically. The first step in getting to know your data is to discover the different data types it contains.
There are two general types of data:
1. **Categorical** data represent categories or groups that are distinct and separable. It usually consists of names, labels, or attributes and is represented by words or symbols.
2. **Numerical** data represents quantities that can be measured and represented as numbers.
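Before the exercise below, here is an optional sketch of one way pandas exposes this distinction: numeric dtypes (like `int64` and `float64`) hold numerical data, while `object` columns typically hold categorical text:
```{python}
# Count how many columns hold each data type
print(sci_data.dtypes.value_counts())
# List just the numeric columns
print(sci_data.select_dtypes(include='number').columns)
```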
#### 👉 Your Turn ⤵
One way to explore the data types is by using the `info()` function. Complete the following code to take a look at the data types for each column in our `sci_data` data frame.
**Hint**: Type the name of the function after the name of the dataset, using `.` before and `()` after it.
```{python}
sci_data.info()
```
Nice work!!
#### ❓Question
Which of the columns in our dataset contain categorical and which contain numerical data? Name a few.
- YOUR RESPONSE HERE
Which data types do you see? Which ones are numerical and which are categorical? Name a few.
- YOUR RESPONSE HERE
If you look at `Grade_category`, you will notice that all values are NaN, or missing, which means we do not have any information about this variable.
What other columns do you think have missing values? Why do you think so?
- YOUR RESPONSE HERE
## 2. WRANGLE
By wrangle, we refer to the process of cleaning and processing data and, in some cases, merging (or joining) data from multiple sources. Often, this part of the process is surprisingly time-intensive! Wrangling your data into shape can itself be an important accomplishment! And documenting your code using Python scripts or Quarto files will save you and others a great deal of time wrangling data in the future!
### Selecting variables
Recall from our Prepare section that we are interested in the relationship between the time students spend on a course and their final course grade. Let's practice selecting these variables!
#### 👉 Your Turn ⤵
Run the following code chunk using `sci_data[[]]` and the names of the columns:
- `FinalGradeCEMS` (i.e., students' final grades on a 0-100 point scale)
- `TimeSpent` (i.e., the number of minutes they spent in the course's learning management system)
```{python}
sci_data[['FinalGradeCEMS','TimeSpent']]
```
Notice how the number of columns (variables) is now different!
::: callout-note
It's important to note that since we haven't "assigned" this filtered data frame to a new object using the `=` assignment operator, the number of variables in the `sci_data` data frame in our environment is still 30, not 2.
:::
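If you did want to keep only those two columns, you could assign the selection to a new object. Here is a quick sketch; the name `sci_data_small` is just for illustration:
```{python}
# Assign the two-column selection to a new object; sci_data itself is unchanged
sci_data_small = sci_data[['FinalGradeCEMS', 'TimeSpent']]
print(sci_data_small.shape)
print(sci_data.shape)
```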
Let's *include one additional variable* that you think might be a predictor of students' final course grade or useful in addressing our research question.
First, we need to figure out what variables exist in our dataset (or be reminded of this - it's very common in Python to be continually checking and inspecting your data)!
Recall that you can use a function named `info()` to do this.
```{python}
sci_data.info()
```
#### 👉 Your Turn ⤵
In the code chunk below, add a new variable, being careful to type the new variable name as it appears in the data. We've added some code to get you started. Consider how the names of the other variables are separated as you think about how to add an additional variable to this code.
```{python}
sci_data[['FinalGradeCEMS','TimeSpent', 'total_points_earned']]
```
Once added, the output should be different than in the code above - there should now be an additional variable included in the print-out.
### Cleaning data
We have already seen that there are missing values in our target columns. Another way to check is to use the `isnull()` function and chain the `sum()` function onto the end to count the number of missing values in each column.
**Hint:** you can use one function and then another function, like `data.function_1().function_2()`
```{python}
sci_data.isnull().sum()
```
### Handling Missing Values
There are several conventional ways to deal with the missing values.
1. We can *drop* those values and not use the entire row in which this element is missing.
2. We can substitute missing values with the *column mean* if the variance within a column is not very big.
#### 👉 Your Turn ⤵
Below are code chunks for both approaches. Choose the one you think is more appropriate: executing one excludes the use of the other, since either one changes the dataset.
#### Drop missing values
The following code performs data cleaning and provides an overview of the cleaned dataset. Specifically, the `dropna()` function removes any rows from the DataFrame `sci_data` that contain missing values (NaN) in the columns `FinalGradeCEMS` and `TimeSpent`.
```{python}
# Remove rows with missing values in 'FinalGradeCEMS' and 'TimeSpent' columns
sci_data = sci_data.dropna(subset=['FinalGradeCEMS', 'TimeSpent'])
# Display a summary of the DataFrame
sci_data.info()
```
#### Substitute with column means
The following code snippet addresses missing values in the dataset by replacing them with the mean values of their respective columns. This approach keeps every row available for analysis, though keep in mind that mean imputation preserves each column's mean while shrinking its variance.
1. **Calculating Mean Values:** The `mean()` function calculates the mean (average) values for the FinalGradeCEMS and TimeSpent columns.
2. **Filling Missing Values:** The `fillna()` function replaces any missing values (NaN) in these columns with their corresponding mean values.
```{python}
# Calculate the mean value for 'FinalGradeCEMS' column
mean_value_grade = sci_data['FinalGradeCEMS'].mean()
# Calculate the mean value for 'TimeSpent' column
mean_value_time = sci_data['TimeSpent'].mean()
# Replace missing values in 'FinalGradeCEMS' column with the mean value
sci_data['FinalGradeCEMS'] = sci_data['FinalGradeCEMS'].fillna(mean_value_grade)
# Replace missing values in 'TimeSpent' column with the mean value
sci_data['TimeSpent'] = sci_data['TimeSpent'].fillna(mean_value_time)
```
### Filtering variables
Finally, let's explore filtering variables by certain values using single brackets `[ ]`.
#### 👉 Your Turn ⤵
Check out and run the next chunk of code, imagining that we wish to filter our data to view only the rows associated with students who earned a final grade (as a percentage) of 70% or higher, along with the `TimeSpent` associated with each.
```{python}
sci_data['TimeSpent'][sci_data['FinalGradeCEMS']>70]
```
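The same bracket syntax can also filter entire rows: the comparison inside the brackets builds a True/False mask that keeps only the matching rows. An optional sketch (the name `high_scorers` is just for illustration):
```{python}
# Keep every column, but only rows where the final grade exceeds 70
high_scorers = sci_data[sci_data['FinalGradeCEMS'] > 70]
high_scorers[['TimeSpent', 'FinalGradeCEMS']].head()
```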
#### ❓Question
Roughly how much time do you think a student needs to spend to earn a grade higher than 70%? Is there a consistent pattern?
- YOUR RESPONSE HERE
## 3. EXPLORE
Exploratory data analysis, or exploring your data, involves processes of *describing* your data (such as by calculating the means and standard deviations of numeric variables, or counting the frequency of categorical variables) and, often, visualizing your data. As we'll learn in later labs, the explore phase can also involve the process of "feature engineering," or creating new variables within a dataset [@krumm2018].
In this section, we'll quickly pull together some basic stats and introduce you to a basic data visualization.
### Summary Statistics
Let's repurpose what we learned in our wrangle section to select just a few variables and quickly gather some descriptive statistics, using the `describe()` function to see where the data are centered and how their values are spread.
```{python}
sci_data.describe()
```
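Since our research question centers on just two variables, you can also chain the selection and the summary together, for example:
```{python}
# Descriptive statistics for just our two focal variables
sci_data[['TimeSpent', 'FinalGradeCEMS']].describe()
```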
#### **👉 Your Turn** **⤵**
❓ What do you notice about this dataset? Which columns are most important for our research question? What do you wonder?
- YOUR RESPONSE HERE
### Data Visualization
Data visualization is an extremely common practice in Learning Analytics, especially in the use of data dashboards. Data visualization involves graphically representing one or more variables with the goal of discovering patterns in data. These patterns may help us to answer research questions or generate new questions about our data, to discover relationships between and among variables, and to create or select features for data modeling.
#### The Graphing Workflow
At its core, you can create some very simple but attractive graphs with just a couple lines of code. Matplotlib follows the common workflow for making graphs. To make a graph, you simply:
1. Start the graph with `plt` and the type of graph, `.hist()` in our case, passing the data as an argument;
2. "Add" elements to the graph, such as the number of `bins` or a different color;
3. Label the variables graphed on each axis with functions like `xlabel()`.
Let's give it a try by creating a simple histogram of our `FinalGradeCEMS` variable. The code below creates a histogram, or a distribution of the values, in this case for students' final grades. Go ahead and run it:
```{python}
# Clear the previous figure to prevent overlapping in Quarto
plt.clf()
# Create a histogram for the 'FinalGradeCEMS' column in the sci_data DataFrame
plt.hist(sci_data['FinalGradeCEMS'], bins=30, color="skyblue")
# Label the x-axis as 'FinalGradeCEMS'
plt.xlabel('FinalGradeCEMS')
# Display the histogram
plt.show()
```
#### **👉 Your Turn** **⤵**
Now use the code chunk below to visualize the distribution of another variable in the data, specifically `TimeSpent`. You can do so by swapping out the variable `FinalGradeCEMS` with our new variable. Also, change the color to one of your choosing; consider this list of valid color names: [https://matplotlib.org/stable/gallery/color/named_colors.html](https://matplotlib.org/stable/gallery/color/named_colors.html)
**Tip:** There is no shame in copying and pasting code from above. Remember, reproducible research is also intended to help you save time!
```{python}
# Clear the previous figure to prevent overlapping in Quarto
plt.clf()
# Create a histogram for the 'TimeSpent' column in the sci_data DataFrame
plt.hist(sci_data['TimeSpent'], bins=30, color="green")
# Label the x-axis as 'TimeSpent'
plt.xlabel('TimeSpent')
# Display the histogram
plt.show()
```
#### Scatterplots
Finally, let's create a scatter plot for the relationship between these two variables. Scatterplots are most useful for displaying the relationship between two continuous variables. You can change the type of graph by using the `scatter()` function, and you can also choose the size of the marker and its color.
#### **👉 Your Turn** **⤵**
Complete the code chunk below to create a simple scatterplot with `TimeSpent` on the x axis and `FinalGradeCEMS` on the y axis.
```{python}
# Clear the previous figure to prevent overlapping in Quarto
plt.clf()
# Create a scatter plot with 'TimeSpent' on the x-axis and 'FinalGradeCEMS' on the y-axis
plt.scatter(x=sci_data['TimeSpent'], y=sci_data['FinalGradeCEMS'], marker=".", color='#88c999')
# Label the x-axis as 'TimeSpent'
plt.xlabel('TimeSpent')
# Label the y-axis as 'FinalGradeCEMS'
plt.ylabel('FinalGradeCEMS')
# Display the scatter plot
plt.show()
```
Well done! As you can see, there appears to be a positive relationship between the time students spend in the online course and their final grade!
## 4. MODEL
"Model" is one of those terms that has many different meanings. For our purpose, we refer to the process of simplifying and summarizing our data as modeling. Thus, models can take many forms; calculating means represents a legitimate form of modeling data, as does estimating more complex models, including linear regressions, and models and algorithms associated with machine learning tasks. For now, we'll run a base linear regression model to further examine the relationship between `TimeSpent` and `FinalGradeCEMS`.
We'll dive much deeper into modeling in subsequent case studies, but for now let's see if there is a statistically significant relationship between students' final grades, `FinalGradeCEMS`, and the `TimeSpent` on the course.
### An Inferential Model
An inferential statistical model is used to understand the relationships between variables and make inferences about the population from a sample. It aims to identify the underlying mechanisms and determine the significance of these relationships.
Key characteristics of an inferential model include:
- **Focus on Understanding:** These models aim to understand the causal or correlational relationships between variables.
- **Hypothesis Testing:** They often involve hypothesis testing to determine if observed patterns are statistically significant.
- **Parameter Estimates:** Inferential models provide estimates of parameters (such as coefficients) that describe the relationship between variables, along with confidence intervals and p-values to assess their significance.
- **Generalization:** The goal is to generalize findings from the sample to the larger population.
#### **👉 Your Turn** **⤵**
Let's use the {statsmodels} package to run a basic linear regression model to further examine the relationship between `TimeSpent` and `FinalGradeCEMS`:
```{python}
# Define the independent variable (predictor) and the dependent variable (response)
X = sci_data['TimeSpent']
y = sci_data['FinalGradeCEMS']
# Add a constant to the independent variable (this adds the intercept term to the model)
X = sm.add_constant(X)
# Fit the linear regression model
model = sm.OLS(y, X).fit()
# Print the summary of the model
print(model.summary())
```
It looks like `TimeSpent` in the online course is indeed positively associated with a higher final grade! That is, students who spent more time in the LMS also earned higher grades. However, before we get too excited, time spent only accounts for a small portion of the variation in grades; there are likely other factors influencing student performance that are not captured by our model.
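If you want to pull specific numbers out of that summary, the fitted results object exposes them directly. A brief sketch, assuming the chunk above has been run:
```{python}
# Extract key quantities from the fitted OLS results
print(model.params)    # intercept and TimeSpent coefficient
print(model.pvalues)   # p-values for each term
print(model.rsquared)  # share of variance in grades explained by time spent
```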
Don't worry too much for now if interpreting model outputs is relatively new for you, or if it's been a while. We'll be working with models in greater depth in our other LASER modules.
### A Predictive Model
A predictive model is designed to make accurate predictions about future or unseen data. The primary goal is to maximize the accuracy of predictions rather than understanding the underlying relationships.
Key characteristics of a predictive model include:
- **Focus on Prediction:** These models aim to accurately predict the outcome variable for new data points.
- **Model Performance:** The effectiveness of predictive models is evaluated based on metrics like R-squared, mean squared error, or other prediction accuracy measures.
- **Complex Algorithms:** Predictive modeling often uses more complex algorithms (e.g., machine learning techniques) that might not be easily interpretable but offer higher predictive power.
- **Validation:** Predictive models rely heavily on validation techniques like cross-validation to ensure that the model performs well on unseen data.
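As a small preview of that validation idea, here is an optional sketch, not required for this case study, of holding out some rows with scikit-learn's `train_test_split` and scoring the model on them (the names `X_all`, `y_all`, and `held_out_model` are just for illustration):
```{python}
from sklearn.model_selection import train_test_split

# Hold out 20% of rows to check how the model performs on unseen data
X_all = np.array(sci_data['TimeSpent']).reshape(-1, 1)
y_all = np.array(sci_data['FinalGradeCEMS'])
X_train, X_test, y_train, y_test = train_test_split(
    X_all, y_all, test_size=0.2, random_state=42)
held_out_model = LinearRegression().fit(X_train, y_train)
print(held_out_model.score(X_test, y_test))  # R-squared on the held-out rows
```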
Before we can run our model - in our case, a linear regression - we'll need to do some transformations of our two parameters. We encode the data from the two columns `FinalGradeCEMS` and `TimeSpent` into a format our scikit-learn linear regression model can understand. We then train (or fit) our model to the data we selected to find a line of best fit.
#### **👉 Your Turn** **⤵**
Run the following code chunk to fit a linear regression model to predict `FinalGradeCEMS` (final grades) based on `TimeSpent` (time in LMS) and visualize the data points and the fitted regression line on a scatter plot:
```{python}
# The dependent variable (the one we want to predict) goes on the y-axis;
# the independent (explanatory) variable goes on the x-axis
X = np.array(sci_data['TimeSpent']).reshape(-1, 1)
y = np.array(sci_data['FinalGradeCEMS']).reshape(-1, 1)
reg = LinearRegression().fit(X, y)
plt.scatter(X, y, marker=".", color = '#88c999')
plt.plot(X, reg.predict(X),color='hotpink')
plt.xlabel('TimeSpent')
plt.ylabel('FinalGradeCEMS')
plt.show()
```
Note that the y-axis shows the dependent variable, the one we want to predict, and the x-axis shows the independent, or explanatory, variable.
We can now use our model to predict a student's grade depending on how much time (in minutes) they spent in the course:
```{python}
# Create a new DataFrame with a column 'TimeSpent' in minutes
new_data = pd.DataFrame({'TimeSpent': [1000, 1500, 2000]})
# Extracting the 'TimeSpent' values from 'new_data'
X_new = np.array(new_data['TimeSpent']).reshape(-1, 1)
# Use the predict method to obtain predictions
predicted_grades = reg.predict(X_new)
# Print the predicted grades
print(predicted_grades)
```
Hmm... Assuming our model is accurate (a big assumption!) even at 2000 minutes the best a student might hope for is a C+!
#### **👉 Your Turn** **⤵**
Change the amount of time a student spends in the course and see how the predicted grade changes.
```{python}
print(reg.predict([[5000]]))
```
#### ❓Question
How much time would a student need to spend in the course to get an A?
- YOUR RESPONSE HERE
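One way to sanity-check your answer: assuming an A is 90 or above and the `reg` model from the chunks above is still in memory, you can solve the fitted line for minutes (the name `minutes_for_a` is just for illustration):
```{python}
# Solve 90 = intercept + slope * minutes for minutes
minutes_for_a = (90 - reg.intercept_[0]) / reg.coef_[0][0]
print(minutes_for_a)
```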
::: callout-important
It's important to note that these models are just illustrative examples of typical modeling approaches used in learning analytics and should be taken with a grain of salt. In our predictive model, for example, we haven't attended to the accuracy of our model or how well it performs on unseen data.
:::
## 5. COMMUNICATE
The final step in the workflow/process is sharing the results of your analysis with a wider audience. Krumm et al. [-@krumm2018] have outlined the following three-step process for communicating findings from an analysis with education stakeholders:
1. **Select.** Communicating what one has learned involves selecting among those analyses that are most important and most useful to an intended audience, as well as selecting a form for displaying that information, such as a graph or table in static or interactive form, i.e. a "data product."
2. **Polish**. After creating initial versions of data products, research teams often spend time refining or polishing them, by adding or editing titles, labels, and notations and by working with colors and shapes to highlight key points.
3. **Narrate.** Writing a narrative to accompany the data products involves, at a minimum, pairing a data product with its related research question, describing how best to interpret the data product, and explaining the ways in which the data product helps answer the research question and might be used to inform new analyses or a "change idea" for improving student learning.
In later modules, you will have an opportunity to create a "data product" designed to illustrate some insights gained from your analysis and ideally highlight an action step or change idea that can be used to improve learning or the contexts in which learning occurs.
For example, imagine that as part of a grant to improve student performance in online courses, you are working with the instructors who taught for this online course provider. As part of some early exploratory work to identify factors influencing performance, you are interested in sharing some of your findings about the hours students logged in the LMS and their final grades. One way we might communicate these findings is through a simple [data dashboard](https://sbkellogg.quarto.pub/final-grades-and-hours-logged/#plots) like the one shown below. Dashboards are a very common reporting tool, and their use, for better or worse, has become ubiquitous in the field of Learning Analytics.
![](img/data-dashboard.png){width="100%"}
### Render Document
For now, we will wrap up this case study by converting your work to an HTML file that can be published and used to communicate your learning and demonstrate some of your new Python skills. To do so, you will need to "render" your document. Rendering a document does two important things; namely, when you render a document it:
1. checks through all your code for any errors; and,
2. creates a file in your project directory that you can use to share your work.
#### 👉 Your Turn ⤵
Now that you've finished your first case study, let's render this document by clicking the ![](img/render.png){width="2%"} Render button in the toolbar at the top of this file. Rendering will convert this Quarto document to an HTML web page, as specified in our YAML header. Web pages are just one of [the many publishing formats you can create with Quarto](https://quarto.org/docs/output-formats/all-formats.html) documents.
If the file rendered correctly, you should now see a new file named `orientation-case-study-key-python.html` in the Files tab located in the bottom right corner of RStudio. If so, congratulations, you just completed the getting started activity! You're now ready for the unit Case Studies that we will complete during the third week of each unit.
::: callout-important
If you encounter errors when you try to render, first check the case study answer key located in the Files pane, which has the suggested code for the Your Turns. If you are still having difficulties, try copying and pasting the error into Google or ChatGPT to see if you can resolve the issue. Finally, contact your instructor to debug the code together if you're still having issues.
:::
### Publish File
There are a wide variety of ways to publish documents, presentations, and websites created using Quarto. Since content rendered with Quarto uses standard formats (HTML, PDFs, MS Word, etc.), it can be published anywhere. Additionally, there is a `quarto publish` command available for easy publishing to various popular services such as [Quarto Pub](https://quartopub.com/), [Posit Cloud](https://posit.cloud/), [RPubs](https://rpubs.com/), [GitHub Pages](https://pages.github.com/), or other services.
#### 👉 Your Turn ⤵
Choose one of the following methods described below for publishing your completed case study.
#### Publishing with Quarto Pub
Quarto Pub is a free publishing service for content created with Quarto. Quarto Pub is ideal for blogs, course or project websites, books, reports, presentations, and personal hobby sites.
It’s important to note that all documents and sites published to Quarto Pub are **publicly visible**. You should only publish content you wish to share publicly.
To publish to Quarto Pub, you will use the `quarto publish` command to publish content rendered on your local machine or via Posit Cloud.
Before attempting your first publish, be sure that you have created a free [Quarto Pub](https://quartopub.com/) account.
The `quarto publish` command provides a very straightforward way to publish documents to Quarto Pub.
For example, here is the Terminal command to publish a generic Quarto `document.qmd` to this service:
``` {.bash filename="Terminal"}
quarto publish quarto-pub document.qmd
```
You can access the terminal directly from the **Terminal Pane** in the lower left corner, as shown below:
![](img/terminal.png){width="100%"}
The actual command you will enter into your terminal to publish your orientation case study is:
`quarto publish quarto-pub orientation-case-study-key-python.qmd`
When you publish to Quarto Pub using `quarto publish`, an access token is used to grant permission for publishing to your account. The first time you publish to Quarto Pub, the Quarto CLI will automatically launch a browser to authorize one, as shown below.
``` {.bash filename="Terminal"}
$ quarto publish quarto-pub
? Authorize (Y/n) ›
❯ In order to publish to Quarto Pub you need to
authorize your account. Please be sure you are
logged into the correct Quarto Pub account in
your default web browser, then press Enter or
'Y' to authorize.
```
Authorization will launch your default web browser to confirm that you want to allow publishing from Quarto CLI. An access token will be generated and saved locally by the Quarto CLI.
Once you've authorized Quarto Pub and published your case study, it should take you immediately to the published document. See my example Orientation Case Study complete with answer key here: [https://sbkellogg.quarto.pub/laser-orientation-case-study-key](https://sbkellogg.quarto.pub/laser-orientation-case-study-key/).
After you've published your first document, you can continue adding more documents, slides, books and even publish entire websites!
![](img/quarto-pub.png){width="100%"}
#### Publishing with RPubs
An alternative, and perhaps the easiest, way to quickly publish your file online is to publish directly from RStudio using Posit Cloud or RPubs. You can do so by clicking the "Publish" button located in the Viewer Pane after you render your document, as illustrated in the screenshot below.
![](img/publish.png){width="100%"}
Similar to Quarto Pub, be sure that you have created a free Posit Cloud or RPubs account before attempting your first publish. You may also need to add your Posit Cloud or RPubs account to RStudio before being able to publish.
See below for examples of my published Orientation Case Study, complete with answer key, using:
- **R Pubs**: <https://rpubs.com/sbkellogg/orientation-case-study-key>
- **Posit Cloud**: <https://posit.cloud/content/8432811>
![](img/posit-cloud-pub.png){width="100%"}
### Your First LASER Badge!
Congratulations, you've completed your first case study!
Once you have shared a link to your published document with your instructor and they have reviewed your work, you will be provided a physical or digital version of the badge pictured below!
![](img/LASER_Hx.png){width="50%"}
### References