Blog

Some Thoughts After Reading *Statistical Rethinking*

Last night I read the first chapter of *Statistical Rethinking: A Bayesian Course with Examples in R and Stan* by Richard McElreath. I found it a nice book to share with my friends. The core idea of that chapter is the relationship between the null hypothesis and the statistical model: should we trust statistical models to reject the null hypothesis? There is a long history of using statistical models to figure out what is true.

Use R as a Shell Scripting Language

The R language can easily be used for shell-style scripting by running a file with Rscript script.R. system() is a base R function that runs command-line programs from within R. Below is a simple example that automates creating a new blog post: (1) ask the user to type in a filename, title, and language; (2) create a new markdown file in a specific directory (i.e., the local path where your posts are saved); (3) add some metadata to the new file.
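The three steps above can be sketched in base R. This is a minimal sketch, not the post's actual script: the `new_post()` helper, its arguments, and the `content/post` directory are illustrative assumptions.

```r
# Minimal sketch of automating a new blog post from Rscript.
# new_post() and the posts directory below are placeholders --
# adjust them to your own setup.
new_post <- function(filename, title, language,
                     posts_dir = "content/post") {
  # (2) make sure the posts directory exists
  dir.create(posts_dir, recursive = TRUE, showWarnings = FALSE)
  path <- file.path(posts_dir, paste0(filename, ".md"))

  # (3) write YAML-style metadata into the new markdown file
  metadata <- c(
    "---",
    paste0("title: \"", title, "\""),
    paste0("date: ", Sys.Date()),
    paste0("language: ", language),
    "---",
    ""
  )
  writeLines(metadata, path)
  path
}

# (1) in an interactive Rscript session you could collect these
# with readLines("stdin", n = 1); here they are passed directly:
path <- new_post("my-first-post", "Hello World", "en")

# system() can then hand the file to other command-line tools,
# e.g. system(paste("open", path)) on macOS.
```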

Introduction to Latent Attribute Network Analysis

A brief introduction to using a network model to visualize the latent attribute hierarchy in diagnostic modeling.

Make a Game in R

Recently I found an interesting R package called nessy, which lets you create a simple game driven by Shiny. So I tried the package out a little. Making an interactive app in R is promising in fields like teaching, presentation, and visualization. Finally, I created the following Shiny app: library(nessy) library(shinyjs) jscode <- "shinyjs.closeWindow = function() { window.close(); }" ui <- cartridge( title = "{Memorize the Names!
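As a minimal runnable skeleton of such an app, the sketch below uses only nessy's cartridge() container (which the snippet above also uses) plus standard Shiny functions; the button and message are illustrative, not the post's actual game.

```r
library(shiny)
library(nessy)

# cartridge() is nessy's NES-styled page container;
# everything inside it here is plain Shiny.
ui <- cartridge(
  title = "Memorize the Names!",
  actionButton("go", "Start"),
  textOutput("msg")
)

server <- function(input, output, session) {
  output$msg <- renderText({
    if (input$go > 0) "Game on!" else "Press Start"
  })
}

# shinyApp() launches the game in the browser when run interactively.
app <- shinyApp(ui, server)
```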

Visualization of Item Information Curves in R

Item Information Curve

This blog shows a new way to display item information curves using the ggridges package. The ridge plot can show the IIF plots very clearly when you have a large number of items.

```r
ggplot(item_information_all %>% filter(item %in% 17:22)) +
  aes(x = factor, y = item, height = info, group = item,
      color = as.factor(item), fill = as.factor(item)) +
  ggridges::geom_ridgeline(alpha = 0.75) +
  ggtitle("Peer Social Capital: Information Functions") +
  theme_ridges()
```

How to do Data Cleaning in R

Libraries · Step 1: Import Data · Step 2: Initial Check (2.1 check variables; 2.2 check missing values and ranges; 2.3 check first and last cases) · Step 3: Select and Rename Variables · Step 4: Remove Missing Values

This blog elaborates the steps for cleaning data. Since datasets vary, it cannot cover everything; depending on the data you are using, different methods should be applied.
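The steps above can be sketched with dplyr. The toy tibble and its column names are assumptions for illustration (in practice Step 1 would be something like readr::read_csv()):

```r
library(dplyr)

# Hypothetical raw data standing in for an imported CSV (Step 1):
raw <- tibble(
  ID       = 1:5,
  q1_score = c(3, NA, 5, 2, 4),
  q2_score = c(1, 2, NA, 3, 5),
  junk_col = letters[1:5]
)

# Step 2: initial checks -- variables, missingness/ranges, head/tail
glimpse(raw)
summary(raw)
colSums(is.na(raw))
head(raw); tail(raw)

# Step 3: select and rename variables (dropping junk_col)
clean <- raw %>%
  select(id = ID, score1 = q1_score, score2 = q2_score)

# Step 4: remove rows with missing values
clean <- clean %>%
  filter(!is.na(score1), !is.na(score2))

clean
```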

Updating the R Version Without Losing Packages

After updating from an old R version to a new one (4.5), by default you have to re-install all your packages. However, there are some solutions for that. Unix (macOS, Linux): 1. Create a new folder in your home directory to store the packages. Sometimes you need to change the permission level of this folder, or R may not have write access to it. Rlibs is a dedicated folder where you can store all your packages.
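A minimal sketch of that setup follows. The ~/Rlibs location comes from the post; pointing R at it via R_LIBS_USER in ~/.Renviron is one common way to make the folder survive upgrades, and is an assumption here rather than the post's exact recipe.

```r
# Create a user-level library folder that survives R upgrades:
lib_dir <- path.expand("~/Rlibs")
dir.create(lib_dir, showWarnings = FALSE)

# Tell R to use it in every session by setting R_LIBS_USER in
# ~/.Renviron (appending, so any existing settings are kept):
cat('R_LIBS_USER="~/Rlibs"\n',
    file = path.expand("~/.Renviron"), append = TRUE)

# After restarting R, ~/Rlibs appears in the library search path
# and install.packages() installs into it by default:
.libPaths()
```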

Academic Writing

This post aims to remind myself how to write articles in an academic writing style. The original article is from http://libguides.usc.edu/writingguide/academicwriting. I. The Big Picture Unlike fiction or journalistic writing, the overall structure of academic writing is formal and logical. It must be cohesive and possess a logically organized flow of ideas; this means that the various parts are connected to form a unified whole. There should be narrative links

Parental Involvement and Children's Motivation

One big problem in education research is that the longitudinal effect of an explanatory factor is often ignored. One reason is that the requirements of longitudinal data are hard to meet. Since a large number of educational studies draw conclusions from cross-sectional data, severe issues can arise. For instance, an effect could be positive in one year, yet become smaller and smaller over time and eventually turn negative.

EFA vs. CFA

Big question: I have always found that exploratory tools and confirmatory tools have distinct fans. The fans of exploratory tools believe conclusions should be data-driven; nothing beyond the data is needed in order to stay objective. On the other hand, some confirmatory fans believe that data can provide nothing without context. Daniel (1988) stated that factor analysis is "designed to examine the covariance structure of a set of variables and to provide an explanation of the relationships among those variables in terms of a smaller number of unobserved latent variables called factors."
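The contrast between the two camps can be sketched in R: psych for EFA (structure suggested by the data) and lavaan for CFA (structure imposed from theory). The dataset and the three-factor structure below are illustrative assumptions, not part of the post.

```r
library(psych)   # exploratory factor analysis
library(lavaan)  # confirmatory factor analysis

# Illustrative data: lavaan's built-in Holzinger-Swineford dataset
data("HolzingerSwineford1939", package = "lavaan")
items <- HolzingerSwineford1939[, paste0("x", 1:9)]

# EFA: let the data suggest the structure (3 factors assumed here)
efa_fit <- psych::fa(items, nfactors = 3, rotate = "varimax")
print(efa_fit$loadings, cutoff = 0.3)

# CFA: specify the structure from theory beforehand, then test it
model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
  speed   =~ x7 + x8 + x9
'
cfa_fit <- lavaan::cfa(model, data = HolzingerSwineford1939)
summary(cfa_fit, fit.measures = TRUE)
```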