So far in this book, when we’ve wanted the Wolfram Language to do something, we’ve written code telling it exactly what to do. But the Wolfram Language can also learn what to do just by looking at examples, using the idea of machine learning. We’ll talk about how to do such training yourself. But first let’s look at some built-in functions that have already been trained on huge numbers of examples. LanguageIdentify takes pieces of text and identifies what human language they’re in.
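To give a feel for what a function like LanguageIdentify does, here is a deliberately tiny Python sketch that guesses a language by stop-word overlap. The word lists and scoring rule are illustrative assumptions; this is not how LanguageIdentify actually works, which is trained on vastly more data.

```python
# Toy language identifier: score text against small stop-word lists
# and pick the language with the biggest overlap.
STOPWORDS = {
    "English": {"the", "and", "is", "of", "to", "in"},
    "Spanish": {"el", "la", "y", "de", "que", "en"},
    "German":  {"der", "die", "und", "ist", "nicht", "das"},
}

def identify_language(text):
    words = set(text.lower().split())
    # Choose the language whose stop words appear most often in the text.
    return max(STOPWORDS, key=lambda lang: len(words & STOPWORDS[lang]))

print(identify_language("the cat is in the garden"))   # English
print(identify_language("el gato de la casa"))         # Spanish
```

A real identifier uses character n-gram statistics and large training corpora, but the shape of the task is the same: map text to the most likely language label.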
Arguably the most important application of machine learning in text analysis, the Word2Vec algorithm is both fascinating and very useful. As the name suggests, it creates vector representations of words based on the corpus we are using. The magic of Word2Vec is how it manages to capture the semantics of a word in a vector. The papers Efficient Estimation of Word Representations in Vector Space, Distributed Representations of Words and Phrases and their Composit…, and Linguistic Regularities in Continuous Space Word Representations introduced the technique.
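A minimal sketch of the distributional idea behind Word2Vec: words that occur in similar contexts get similar vectors. The toy corpus and window size below are illustrative assumptions, and real Word2Vec learns dense vectors with a shallow neural network (e.g. via the gensim library) rather than using raw co-occurrence counts.

```python
import math
from collections import Counter

# Represent each word by counts of its neighbours within a context window,
# then compare words by cosine similarity of those count vectors.
corpus = "the king rules the land the queen rules the land the dog chews a bone".split()

def cooccurrence_vectors(tokens, window=2):
    vectors = {w: Counter() for w in tokens}
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                vectors[w][tokens[j]] += 1
    return vectors

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)
    norm = lambda c: math.sqrt(sum(x * x for x in c.values()))
    return dot / (norm(u) * norm(v))

vecs = cooccurrence_vectors(corpus)
# "king" and "queen" share contexts ("rules the land"), so they score
# higher than "king" and "dog".
print(cosine(vecs["king"], vecs["queen"]) > cosine(vecs["king"], vecs["dog"]))  # True
```

The vector-arithmetic properties discussed in the papers above (e.g. analogies) emerge only with trained dense embeddings, but this sketch shows where the signal comes from: shared contexts.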
In this tutorial, you’ll learn the basic concepts and terminology of reinforcement learning. At the end of the tutorial, we’ll discuss the epsilon-greedy algorithm for building reinforcement-learning-based solutions.
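As a preview, here is a minimal epsilon-greedy sketch on a three-armed bandit: with probability epsilon the agent explores a random arm, otherwise it exploits the arm with the best observed mean reward. The payout probabilities and epsilon value are illustrative assumptions.

```python
import random

random.seed(0)
true_probs = [0.2, 0.5, 0.8]   # hidden payout probability per arm
counts = [0, 0, 0]             # pulls per arm
values = [0.0, 0.0, 0.0]       # running mean reward per arm
epsilon = 0.1

for step in range(5000):
    if random.random() < epsilon:
        arm = random.randrange(3)          # explore: pick a random arm
    else:
        arm = values.index(max(values))    # exploit: pick the best arm so far
    reward = 1 if random.random() < true_probs[arm] else 0
    counts[arm] += 1
    # Incremental mean update: V <- V + (r - V) / n
    values[arm] += (reward - values[arm]) / counts[arm]

print(values.index(max(values)))   # after many steps, typically arm 2 (the best arm)
```

The single parameter epsilon controls the exploration/exploitation trade-off that the rest of the tutorial builds on.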
We have previously added a set of company-identity-agnostic predictors, such as the number of drivers a company employs or the number of vehicles in the fleet with a hydraulic lift. We took this approach, rather than treating each company as a unique predictor, so that the addition of a new contractor would not (necessarily) confuse our model.
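The design choice can be sketched as a feature-extraction function that describes a company by measurable attributes rather than by its identity, so an unseen contractor maps into exactly the same feature space. The field names below ("drivers", "hydraulic_lift_vehicles", "fleet_size") are hypothetical.

```python
# Identity-agnostic feature extraction: no company name or ID in the output.
def to_features(company):
    return {
        "num_drivers": company["drivers"],
        "num_hydraulic_lift_vehicles": company["hydraulic_lift_vehicles"],
        "fleet_size": company["fleet_size"],
    }

acme = {"name": "Acme Hauling", "drivers": 12,
        "hydraulic_lift_vehicles": 3, "fleet_size": 15}
newco = {"name": "Brand-New Co", "drivers": 2,
         "hydraulic_lift_vehicles": 1, "fleet_size": 2}

# A known company and a brand-new contractor yield the same feature keys:
print(to_features(acme).keys() == to_features(newco).keys())   # True
```

Had each company been one-hot encoded instead, a new contractor would have produced a feature column the model had never seen during training.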
Python has become a required skill for data science, and it’s easy to see why. It’s powerful, easy to learn, and includes libraries like pandas, NumPy, and scikit-learn that help you slice, scrub, munge, and wrangle your data. Even with a great language and fantastic tools, though, there’s plenty to learn!
Artificial intelligence (AI) has been front and center in cloud platforms over the last few years. While the progress in cloud AI technologies has been remarkable, I’ve always felt that most stacks in the space were missing key elements needed to become runtimes for real-world AI solutions. This week during the re:Invent conference, AWS announced a series of new releases that bring its SageMaker platform closer to the needs of real-world machine learning solutions.
Decision Trees are a class of very powerful Machine Learning models capable of achieving high accuracy on many tasks while remaining highly interpretable. What makes decision trees special in the realm of ML models is their clarity of information representation. The ‘knowledge’ a decision tree learns through training is formulated directly into a hierarchical structure, which holds and displays that knowledge in a way that can easily be understood, even by non-experts.
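To make that hierarchical-knowledge point concrete, here is a minimal pure-Python decision tree trained on a toy "play outside?" dataset, using a greedy split on whichever feature gives the lowest weighted Gini impurity. The data and features are illustrative assumptions, not from any real library.

```python
from collections import Counter

data = [
    ({"outlook": "sunny", "windy": False}, "yes"),
    ({"outlook": "sunny", "windy": True},  "no"),
    ({"outlook": "rainy", "windy": False}, "no"),
    ({"outlook": "rainy", "windy": True},  "no"),
]

def gini(rows):
    counts = Counter(label for _, label in rows)
    n = len(rows)
    return 1 - sum((c / n) ** 2 for c in counts.values())

def split_score(rows, feature):
    # Weighted Gini impurity of the partition induced by this feature.
    total = len(rows)
    score = 0.0
    for value in {row[0][feature] for row in rows}:
        subset = [row for row in rows if row[0][feature] == value]
        score += len(subset) / total * gini(subset)
    return score

def build(rows, features):
    labels = [label for _, label in rows]
    if len(set(labels)) == 1 or not features:
        return Counter(labels).most_common(1)[0][0]        # leaf: majority label
    best = min(features, key=lambda f: split_score(rows, f))
    children = {}
    for value in {row[0][best] for row in rows}:
        subset = [row for row in rows if row[0][best] == value]
        children[value] = build(subset, [f for f in features if f != best])
    return (best, children)                                # internal node

def predict(tree, example):
    while isinstance(tree, tuple):
        feature, children = tree
        tree = children[example[feature]]
    return tree

tree = build(data, ["outlook", "windy"])
print(predict(tree, {"outlook": "sunny", "windy": False}))   # yes
```

The resulting `tree` is just nested (feature, children) tuples, which is exactly why the learned knowledge is readable: it is a hierarchy of if/else questions anyone can follow.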
Advances in data lakes and big data go hand in hand: as data lakes make a splash in the technology world, big data continues to evolve alongside them.
Today I want to solve a very popular NLP task called Named Entity Recognition (NER). In short, NER is the task of extracting named entities from a sequence of words (a sentence). For example …
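To show the shape of the task before bringing in a model, here is a deliberately naive rule-based sketch: tag runs of capitalized, non-sentence-initial words as candidate entities. Real NER systems (spaCy, BERT-based taggers, and the like) learn this from labelled data; the capitalization rule here is purely an illustrative assumption.

```python
def find_entities(sentence):
    # Group consecutive capitalized tokens (skipping the sentence-initial
    # word) into candidate named entities.
    tokens = sentence.split()
    entities, current = [], []
    for i, tok in enumerate(tokens):
        word = tok.strip(".,!?")
        if word and word[0].isupper() and i > 0:
            current.append(word)
        else:
            if current:
                entities.append(" ".join(current))
                current = []
    if current:
        entities.append(" ".join(current))
    return entities

print(find_entities("Yesterday Angela Merkel met Emmanuel Macron in Berlin."))
# ['Angela Merkel', 'Emmanuel Macron', 'Berlin']
```

The rule immediately breaks on lowercase entities, mid-sentence capitalization, and entity types, which is precisely why NER is treated as a learning problem.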
The development of deep learning (DL) networks requires rapid prototyping when testing new models. For this reason, several TensorFlow-based libraries have been built, which abstract many programming concepts and provide high-level building blocks. Nobody wants to waste time solving problems that have already been solved. And chances are that the people who implemented the high-level API are experts in that low-level problem and have solved it better than you ever could.
TEXT EDITORS (and the files they work on) reveal surprisingly little about the history of editing. If you’re lucky, you get revisions to browse, and if not, you get undo/redo buttons. By adding temporal metadata to files, apps can display more than just the product: they can show the process. This post introduces the writing graph, a timeline for viewing editing activity. A proof of concept below shows how new media artists, reflective writers, and even casual readers can use this text visualization to learn more about what they’re reading.
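The temporal metadata could be as simple as a log of timestamped edit events, bucketed into time intervals for a timeline view. The event format below is a hypothetical design for illustration, not an existing file format.

```python
from collections import Counter

edits = [
    # (seconds since start, operation, characters affected)
    (3,   "insert", 120),
    (41,  "insert", 80),
    (65,  "delete", 30),
    (70,  "insert", 200),
    (190, "insert", 15),
]

def activity_timeline(events, bucket=60):
    # Characters touched per time bucket: the raw material for a timeline view.
    timeline = Counter()
    for t, _, chars in events:
        timeline[t // bucket] += chars
    return dict(sorted(timeline.items()))

print(activity_timeline(edits))   # {0: 200, 1: 230, 3: 15}
```

Note the gap at bucket 2: pauses in editing are visible, which is exactly the kind of process information a finished file hides.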
How have U.S. economic recessions compared in severity? And how can severity be measured? These were the questions I was considering after identifying recessions for the final project of Introduction to Data Science in Python with Christopher Brooks. The following article outlines some key lessons learned in the process of analyzing the data in pandas and visualizing it with three.js.
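Identifying the recessions in the first place can be sketched with a common rule of thumb: two or more consecutive quarters of declining GDP. This pure-Python version and its GDP figures are illustrative assumptions, not the project's actual data or code.

```python
# Toy quarterly GDP series (made-up figures).
gdp = [100.0, 101.2, 100.5, 99.8, 99.1, 99.6, 100.4, 100.2, 101.0]

def find_recessions(series):
    # Return (start, end) index pairs of runs with >= 2 consecutive declines.
    recessions, start = [], None
    for i in range(1, len(series)):
        if series[i] < series[i - 1]:
            if start is None:
                start = i - 1
        else:
            if start is not None and i - 1 - start >= 2:
                recessions.append((start, i - 1))
            start = None
    if start is not None and len(series) - 1 - start >= 2:
        recessions.append((start, len(series) - 1))
    return recessions

print(find_recessions(gdp))   # [(1, 4)]: one recession, peak at index 1, trough at 4
```

The single-quarter dip near the end of the series is correctly excluded. With the start/end indices in hand, severity measures such as peak-to-trough GDP decline or duration follow directly.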
QUTIS (Quantum Technologies for Information Science) Group researchers have successfully realized quantum artificial life on IBM’s cloud quantum computer ibmqx4, a world first. It’s not just life that’s being simulated: the organisms even evolve. That was the goal: to simulate natural evolution on a quantum computer. Why do I love this news? Because even though the simulated organisms are quite simple, the subject touches on many fields and phenomena. More importantly, the experiment shows quantum supremacy: a clear advantage of quantum computing over classical computing. In what follows, I’ll give a few examples of the subjects the experiment touches, to provide some background on the news.