**Simulation-based power analysis using proportional odds logistic regression**

Consider planning a clinical trial where patients are randomized in permuted blocks of size four to either a ‘control’ or ‘treatment’ group. The outcome is measured on an 11-point ordinal scale (e.g., the numerical rating scale for pain). It may be reasonable to evaluate the results of this trial using a proportional odds cumulative logit (POCL) model, provided the proportional odds assumption is valid. The POCL model uses a series of ‘intercept’ parameters, denoted α_1 ≤ … ≤ α_{r−1}, where r is the number of ordered categories, and ‘slope’ parameters β_1, …, β_m, where m is the number of covariates. The intercept parameters encode the ‘baseline’ (control-group) frequencies of each category, and the slope parameters represent the effects of covariates (e.g., the treatment effect).
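The category probabilities implied by those intercepts and slopes can be sketched in a few lines. This is a minimal illustration, not the post's own code: the intercept values and effect size below are made up, and it uses the sign convention P(Y ≤ j | x) = expit(α_j − xβ) (as in R's `MASS::polr`).

```python
import numpy as np
from scipy.special import expit  # logistic CDF

def pocl_probs(alphas, beta, x):
    """Category probabilities under a proportional odds cumulative logit model,
    with P(Y <= j | x) = expit(alpha_j - beta * x). `alphas` must be nondecreasing."""
    cum = expit(np.asarray(alphas, float) - beta * x)  # P(Y <= j), j = 1..r-1
    cum = np.concatenate([[0.0], cum, [1.0]])          # pad with P(Y <= 0) and P(Y <= r)
    return np.diff(cum)                                # P(Y = j), j = 1..r

# Illustrative values only: an 11-category scale needs r - 1 = 10 intercepts.
alphas = np.linspace(-2.5, 2.5, 10)
p_control = pocl_probs(alphas, beta=0.8, x=0)  # x = 0: control arm
p_treated = pocl_probs(alphas, beta=0.8, x=1)  # x = 1: treatment arm
```

A simulation-based power analysis would repeatedly draw category outcomes from `p_control` and `p_treated`, fit the POCL model to each simulated trial, and count how often the treatment effect is detected.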

**Python Machine Learning Open Source Projects**

A collection of Python machine learning open-source projects.

**Python Modules for Data Science & Analytics**

A collection of important Python modules for data scientists.

**Simple Regime Change Detection with t-test**

It is always fun to find a trend in time series data. But what about scenarios where the trend in the time series changes? Detecting the point of this trend change can be quite beneficial. For example, if you can immediately detect a change in a company's revenue regime, that information can be very valuable to the company.
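The idea can be sketched with a brute-force scan: at each candidate split point, run a two-sample t-test between the observations before and after it, and take the split with the largest test statistic as the estimated change point. This is a minimal sketch on simulated data (the `detect_change` helper and its parameters are illustrative, not from the post).

```python
import numpy as np
from scipy import stats

def detect_change(series, min_size=5):
    """Return the split index whose two-sample t-test (Welch's) between the
    left and right segments gives the largest |t| statistic."""
    best_idx, best_t = None, 0.0
    for i in range(min_size, len(series) - min_size):
        t, _ = stats.ttest_ind(series[:i], series[i:], equal_var=False)
        if abs(t) > best_t:
            best_idx, best_t = i, abs(t)
    return best_idx

# Simulated series: mean 0 for 50 points, then a regime shift to mean 3.
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0, 1, 50), rng.normal(3, 1, 50)])
cp = detect_change(series)  # expected near index 50
```

Scanning every split is O(n) t-tests, which is fine for short series; for long series one would test in sliding windows instead.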

**Old is New: XML and rvest**

After my wonderful experience using dplyr and tidyr recently, I decided to revisit some of my old running code and see if it could use an upgrade by swapping out the XML dependency with rvest.

**Exact computation of sums and means**

A while ago, I came across a mention of the Python math.fsum function, which sums a set of floating-point values exactly, then rounds to the closest floating point value. This seemed useful. In particular, I thought that if it’s fast enough it could be used instead of R’s rather primitive two-pass approach to trying to compute the sample mean more accurately (but still not exactly). My initial thought was to just implement the algorithm Python uses in pqR. But I soon discovered that there were newer (and faster) algorithms. And then I thought that I might be able to do even better…
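The problem `math.fsum` solves is easy to demonstrate: naive left-to-right summation loses low-order bits when large and small magnitudes mix, while `fsum` tracks the intermediate sums exactly and rounds only once at the end. A minimal illustration:

```python
import math

# 1e16 is exactly representable, but adjacent doubles there are 2 apart,
# so 1e16 + 1.0 rounds back to 1e16 and the +1 is lost in a naive sum.
values = [1e16, 1.0, -1e16]

naive = sum(values)        # accumulates rounding error
exact = math.fsum(values)  # exact shadow sum, single final rounding
```

Here `naive` comes out as 0.0 and `exact` as 1.0, which is the correctly rounded result. The same single-pass idea is what makes `fsum` attractive as a replacement for a two-pass mean computation.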

**CONCOR in R**

In network analysis, blockmodels provide a simplified representation of a more complex relational structure. The basic idea is to assign each actor to a position and then depict the relationship between positions. In settings where relational dynamics are sufficiently routinized, the relationship between positions neatly summarizes the relationship between sets of actors. How do we go about assigning actors to positions? Early work on this problem focused in particular on the concept of structural equivalence. Formally speaking, a pair of actors is said to be structurally equivalent if they are tied to the same set of alters. Note that by this definition, a pair of actors can be structurally equivalent without being tied to one another. This idea is central to debates over the role of cohesion versus equivalence.
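CONCOR ("convergence of iterated correlations") operationalizes structural equivalence by correlating actors' tie profiles, then correlating the resulting correlation matrix, and repeating until every entry converges to ±1; the signs then split the actors into two positions. This is a minimal one-split sketch on a toy network (full CONCOR typically stacks row and column profiles and splits blocks recursively, which is omitted here):

```python
import numpy as np

def concor_split(adj, max_iter=100, tol=1e-6):
    """One CONCOR split: iterate column-profile correlations until they
    converge to +/-1, then partition actors by sign against actor 0."""
    m = np.corrcoef(adj, rowvar=False)   # correlate actors' tie profiles
    for _ in range(max_iter):
        new = np.corrcoef(m, rowvar=False)  # correlate the correlations
        if np.max(np.abs(new - m)) < tol:
            m = new
            break
        m = new
    return m[0] > 0  # True/False = the two positions

# Toy network: actors 0 and 1 tie to {2, 3}; actors 2 and 3 tie to {0, 1}.
# Actors 0 and 1 are structurally equivalent without being tied to each other.
adj = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [1, 1, 0, 0],
                [1, 1, 0, 0]], dtype=float)
blocks = concor_split(adj)
```

The toy network also illustrates the definitional point from the paragraph above: actors 0 and 1 land in the same position because they share the same alters, even though no tie connects them.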