The internet provides access to a plethora of information today; whatever we need is just a Google search away. But with so much information available, the challenge is to separate the relevant from the irrelevant. When our brain is fed a lot of information simultaneously, it works hard to classify that information into useful and not-so-useful pieces. Neural networks need a similar mechanism to classify incoming information as useful or less useful. This matters for how a network learns, because not all information is equally useful; some of it is just noise. Activation functions help the network make this distinction: they let it amplify the useful signals and suppress the irrelevant ones. Let us go through these activation functions, see how they work, and figure out which activation function fits which kind of problem.
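As a minimal sketch of the idea (not tied to any particular framework), here are two common activation functions in plain Python; note how ReLU zeroes out negative inputs, suppressing that part of the signal, while the sigmoid squashes everything into (0, 1):

```python
import math

def sigmoid(x):
    # Squashes any real input into (0, 1); large negatives go toward 0.
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    # Passes positive signals through unchanged and zeroes out the rest.
    return max(0.0, x)

inputs = [-2.0, -0.5, 0.0, 1.0, 3.0]
print([relu(x) for x in inputs])    # negative activations are suppressed
print([round(sigmoid(x), 3) for x in inputs])
```

Which function "fits" depends on the layer and task; for instance, ReLU is a common default for hidden layers, while sigmoid suits binary-probability outputs.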
This is the second in a series of posts on applying machine learning to intraday stock price/return prediction. Price prediction is crucial to most trading firms, and people have been using various prediction techniques for many years. We will explore those techniques as well as more recently popular algorithms like neural networks. In this post, we will focus on applying neural networks to features derived from market data.
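As a hedged illustration (the post's actual feature set is not specified here, and the prices below are hypothetical), a common first step is to turn a raw price series into roughly stationary features such as log returns and a short rolling mean:

```python
import math

prices = [100.0, 101.5, 101.0, 102.2, 103.0]  # hypothetical intraday prices

# Log returns: scale-free inputs that are friendlier to a model than raw prices.
log_returns = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

def rolling_mean(xs, k):
    # Mean of the last k values at each step: a simple smoothed-momentum feature.
    return [sum(xs[i - k + 1:i + 1]) / k for i in range(k - 1, len(xs))]

print([round(r, 5) for r in log_returns])
print([round(m, 5) for m in rolling_mean(log_returns, 2)])
```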
I don’t hand-mount disks much these days, but sometimes I want to spin up a quick EC2 instance to hammer something out, and I want to use the fast SSD storage that comes with some instances, and those disks don’t auto-mount.
An analytics economy is emerging, in which organizations differentiate themselves and succeed through a blend of data, analytics, and collaboration. Data is everywhere; we’ve been saying that since the big data craze began a few years ago. But now we’re seeing something different. It’s not just data. It’s accessible data, fueled by advances in computing power, connectivity, and powerful analytics. This mixture of data, analytics, and the ability to collaborate forms the foundation of the analytics economy, where each insight sparks the next. Much like compound interest, value comes from compounded insights. It’s where people work together with data and machines to accelerate innovation, creating a nonstop engine for progress. Now is the right time to capitalize on the analytics economy, since analytics are easier to use than ever for everyone, from data scientists to business users to executives. The maturity and pervasiveness of analytics have increased their adoption, and a convergence of emerging technologies and existing capabilities is opening new possibilities.
This paper describes and motivates a new decision theory known as functional decision theory (FDT), as distinct from causal decision theory and evidential decision theory. Functional decision theorists hold that the normative principle for action is to treat one’s decision as the output of a fixed mathematical function that answers the question, “Which output of this very function would yield the best outcome?” Adhering to this principle delivers a number of benefits, including the ability to maximize wealth in an array of traditional decision-theoretic and game-theoretic problems where CDT and EDT perform poorly. Using one simple and coherent decision rule, functional decision theorists (for example) achieve more utility than CDT on Newcomb’s problem, more utility than EDT on the smoking lesion problem, and more utility than both in Parfit’s hitchhiker problem. In this paper, we define FDT, explore its prescriptions in a number of different decision problems, compare it to CDT and EDT, and give philosophical justifications for FDT as a normative theory of decision-making.
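To see why Newcomb’s problem rewards the FDT agent, here is an illustrative expected-value calculation using the canonical payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box) and an assumed predictor accuracy of 99% (the accuracy figure is an assumption for illustration, not from the paper):

```python
# Canonical Newcomb payoffs: the transparent box always holds $1,000; the
# opaque box holds $1,000,000 iff the predictor foresaw one-boxing.
SMALL, BIG = 1_000, 1_000_000
accuracy = 0.99  # assumed predictor accuracy

# Expected value if the agent's decision procedure one-boxes
# (FDT, like EDT, prescribes this):
ev_one_box = accuracy * BIG

# Expected value if it two-boxes (CDT's prescription): the predictor
# usually foresees this and leaves the opaque box empty.
ev_two_box = SMALL + (1 - accuracy) * BIG

print(ev_one_box, ev_two_box)
```

With any reasonably accurate predictor, the one-boxing decision procedure ends up far wealthier, which is the sense in which FDT outperforms CDT here.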
Nowadays, a lot of interesting time series data is freely available, allowing us to compare important economic, social, and environmental trends across countries. I feel that one can learn a lot by surfing the data sections on the websites of institutions like the Gapminder Foundation, the World Bank, or the OECD. At the same time, I am quite a big fan of learning facts through quiz questions. Since my internet search did not yield any apps or websites that present these interesting time series in the form of quizzes, I coded a bit in R and built a Shiny app that creates such quizzes based on OECD data and some Eurostat data.
The first part left an open door to analyzing Rick and Morty content using tf-idf, bag-of-words, or other NLP techniques. Here I’m also taking a lot of ideas from Julia Silge’s blog.
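For readers unfamiliar with tf-idf, here is a from-scratch sketch of the weighting on a toy corpus (the example lines are hypothetical stand-ins, not the actual transcripts): a term that appears often in one document but rarely across the corpus gets a high weight.

```python
import math

# Toy corpus standing in for episode transcripts (hypothetical lines).
docs = [
    "wubba lubba dub dub".split(),
    "get schwifty get schwifty".split(),
    "wubba dub science".split(),
]

def tf_idf(term, doc, docs):
    # Term frequency: share of the document made up by this term.
    tf = doc.count(term) / len(doc)
    # Inverse document frequency: rarer across documents -> higher weight.
    df = sum(1 for d in docs if term in d)
    idf = math.log(len(docs) / df)
    return tf * idf

print(round(tf_idf("science", docs[2], docs), 3))  # rare term, higher weight
print(round(tf_idf("dub", docs[0], docs), 3))      # common term, lower weight
```

In practice one would use a library implementation (tidytext in R, as in Julia Silge’s posts, or scikit-learn in Python), which adds smoothing and normalization on top of this basic formula.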
Applying methods from Agile software development to data science projects. Insight comes from the 25th query in a chain of queries, not the first one. Data tables have to be parsed, formatted, sorted, aggregated, and summarized before they can be understood. Insightful charts typically come from the third or fourth attempt, not the first. Building accurate predictive models can take many iterations of feature engineering and hyperparameter tuning. In data science, iteration is essential to the extraction, visualization, and productization of insight. When we build, we iterate.
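The parse–format–sort–aggregate–summarize chain mentioned above can be sketched on a few hypothetical raw rows (the data and column names are invented for illustration):

```python
from collections import defaultdict

# Hypothetical raw rows, as they might come out of a query or CSV export.
raw = [
    "2021-01-04,east,12",
    "2021-01-04,west,10",
    "2021-01-05,east,15",
    "2021-01-05,west,20",
]

# Parse and format: split fields and convert the numeric column.
rows = [(d, region, float(v)) for d, region, v in
        (line.split(",") for line in raw)]

# Aggregate: collect values by region.
by_region = defaultdict(list)
for _, region, v in rows:
    by_region[region].append(v)

# Summarize: mean value per region, sorted for stable output.
summary = {r: sum(vs) / len(vs) for r, vs in sorted(by_region.items())}
print(summary)
```

Each of these steps is typically revisited several times as understanding improves, which is exactly the iteration the paragraph describes.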
Much progress has been made over the past decade on process and tooling for managing large-scale, multitier, multicloud apps and APIs, but there is far less common knowledge on best practices for managing machine-learned models (classifiers, forecasters, etc.), especially beyond the modeling, optimization, and deployment process once these models are in production. Machine-learning and data science systems often fail in production in unexpected ways. David Talby shares real-world case studies showing why this happens and explains what you can do about it, covering best practices and lessons learned from a decade of experience building and operating such systems at Fortune 500 companies across several industries.
This article will take you through all the steps required to build a simple feed-forward neural network in TensorFlow, explaining each step in detail.
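Before reaching for TensorFlow, it can help to see the computation such a network performs. Here is a pure-Python sketch of a forward pass through a hypothetical 2–3–1 network with hand-picked weights (not the article’s model):

```python
import math

def dense(x, W, b, activation):
    # One fully connected layer: activation(W @ x + b), row by row.
    return [activation(sum(w * xi for w, xi in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

relu = lambda z: max(0.0, z)
sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights: 2 inputs -> 3 hidden units -> 1 output.
W1 = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
b1 = [0.0, 0.1, -0.1]
W2 = [[1.0, -1.0, 0.5]]
b2 = [0.2]

x = [1.0, 2.0]
hidden = dense(x, W1, b1, relu)          # hidden layer with ReLU
output = dense(hidden, W2, b2, sigmoid)  # sigmoid output in (0, 1)
print(output)
```

A TensorFlow version stacks the same layers (e.g. with Keras `Dense` layers) and, crucially, learns the weights by gradient descent instead of using hand-picked values.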