The Quest for Higher Accuracy in CNN Models

In this post, we will learn techniques for improving accuracy through data redesign, hyper-parameter tuning, and model optimization.
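The excerpt doesn't detail the techniques, but as one hedged illustration of the data-redesign side, a typical Keras augmentation setup looks like the sketch below (all parameter values are examples, not taken from the post):

```python
# Illustrative only: augmenting training images is one common way to
# "redesign" the data for higher CNN accuracy. Parameter values are examples.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=15,       # random rotations up to 15 degrees
    width_shift_range=0.1,   # random horizontal shifts
    height_shift_range=0.1,  # random vertical shifts
    horizontal_flip=True,    # mirror images left-right
    rescale=1.0 / 255,       # normalize pixel values to [0, 1]
)

# flow_from_directory yields augmented batches during training, e.g.:
# train_gen = augmenter.flow_from_directory("data/train", target_size=(224, 224))
# model.fit(train_gen, epochs=10)
```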


Converting a Simple Deep Learning Model from PyTorch to TensorFlow

TensorFlow and PyTorch are two of the most popular frameworks for deep learning. Some prefer TensorFlow for its deployment support, while others prefer PyTorch for the flexibility it offers in model building and training, without the difficulties often faced when using TensorFlow. The downside of PyTorch is that models built and trained with it cannot easily be deployed into production. One solution to this problem is ONNX (Open Neural Network Exchange). As explained on ONNX's About page, ONNX is like a bridge that links the various deep learning frameworks together: the ONNX tool enables conversion of models from one framework to another. As of this writing, ONNX is limited to simpler model structures, though support may expand over time. This article will illustrate how a simple deep learning model can be converted from PyTorch to TensorFlow.
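As a minimal sketch of the path the article describes (the model definition, file names, and shapes below are illustrative, and the onnx and onnx-tf packages are assumed to be installed):

```python
# PyTorch -> ONNX -> TensorFlow, as a minimal illustrative sketch.
import torch
import torch.nn as nn
import onnx
from onnx_tf.backend import prepare

class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(784, 10)

    def forward(self, x):
        return self.fc(x)

model = SimpleNet()
model.eval()

# Export to ONNX using a dummy input that fixes the expected shape.
dummy = torch.randn(1, 784)
torch.onnx.export(model, dummy, "simple_net.onnx")

# Load the ONNX graph and convert it to a TensorFlow representation.
onnx_model = onnx.load("simple_net.onnx")
tf_rep = prepare(onnx_model)          # TensorFlow representation of the graph
tf_rep.export_graph("simple_net_tf")  # saved for use from TensorFlow
```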


Huawei Launches AI-Native Database

Following the announcement of its AI strategy and full-stack, all-scenario AI solutions in 2018, Huawei launched the AI-Native database GaussDB and the highest-performance distributed storage FusionStorage 8.0 today in Beijing. The aim of this launch is to redefine data infrastructure through a Data + Intelligence strategy. ‘Humanity is entering the age of an intelligent world,’ said David Wang, Huawei Executive Director of the Board and President of ICT Strategy & Marketing. ‘Data is the new factor of production, and intelligence the new productivity. Heterogeneous, intelligent, and converged databases will become the key data infrastructure of the financial, government, and telecoms industries.’ Committed to building a fully connected, intelligent world, Huawei is a major contributor to ICT infrastructure and smart devices. The leading ICT product and solutions provider continues to invest and innovate in AI computing power, algorithms, and labeled data, achieving many breakthroughs. Mr. Wang added, ‘AI-Native database GaussDB will help enhance HUAWEI CLOUD’s capabilities and fully unleash the power of diversified computing, which includes x86, ARM, GPU, and NPU computing. We aim to continuously push our AI strategy forward and foster a complete computing ecosystem. Together with our partners, we will move further towards the intelligent world.’ At the launch event, Mr. Wang also reiterated Huawei’s commitment to advancing intelligent industries by innovating together with customers and partners and building a data industry ecosystem on the principles of openness, collaboration, and shared success.


The Data Fabric Universe [Infographic]

Companies today recognize that exploiting the full value of their enterprise data to enable expanded use of analytics is an essential competitive battleground of the future. Data assets from operational and BI systems, big data sources, unstructured data, and the cloud offer a competitive edge to companies that can become data and analytics experts today. At Cambridge Semantics, we are focused on empowering companies to expand and accelerate the delivery of analytics-ready data to their business users by implementing a data discovery and integration layer as part of a modern data management architecture called the Enterprise Data Fabric. This discovery and integration layer sits above the company’s enterprise data assets and provides data consumers with a connected, business-oriented map of enterprise data. People across the business with data or analytics needs use that map to explore, understand, connect, and blend data into analytics-ready data sets that combine any data from any system across the enterprise.


Open Academic Graph

Open Academic Graph (OAG) is a large knowledge graph unifying two billion-scale academic graphs: Microsoft Academic Graph (MAG) and AMiner. In mid-2017, we published OAG v1, which contains 166,192,182 papers from MAG and 154,771,162 papers from AMiner, and generated 64,639,608 linking (matching) relations between the two graphs. In OAG v2, author, venue, and newer publication data, along with the corresponding matchings, are available.
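For readers who want to work with the matchings, the linking files are distributed as JSON lines; a minimal reading sketch follows (the field names "mid" and "aid" and the file name are assumptions for illustration and should be checked against the release notes):

```python
# Sketch: read OAG-style MAG<->AMiner linking pairs from a JSON-lines file.
# Field and file names are assumptions; verify against the OAG documentation.
import json

links = []
with open("paper_linking_pairs.txt") as f:
    for line in f:
        pair = json.loads(line)
        links.append((pair["mid"], pair["aid"]))  # MAG id <-> AMiner id

print(f"loaded {len(links)} MAG<->AMiner matchings")
```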


Microsoft Academic Graph

The Microsoft Academic Graph is a heterogeneous graph containing scientific publication records, citation relationships between those publications, as well as authors, institutions, journals, conferences, and fields of study. This graph is used to power experiences in Bing, Cortana, Word, and Microsoft Academic. The graph is currently updated on a weekly basis.


The Convergence of RPA and Automated Machine Learning

By combining Robotic Process Automation (RPA) with artificial intelligence, organizations can expand both the range of processes available for automation and the business problems that can be solved with it. Using business logic and structured inputs to automate business processes, RPA software can mimic the routine tasks of human workers and free up their time to focus on more strategic goals. Combining RPA and automated machine learning, companies now have the ability to transform digital operating models.


Online Learning with O’Reilly

O’Reilly learning provides individuals, teams, and businesses with expert-created and curated information covering all the areas that will shape our future – including artificial intelligence, operations, data, UX design, finance, leadership, and more.


The Exploration-Exploitation Dilemma

Imagine that, after a crazy urge to gamble away your life savings, you walk into a casino. In front of you is a row of slot machines. To have any chance of becoming a millionaire, you should play the machine with the highest probability of winning, but obviously you don’t know which one that is. So you need to try each of them some number of times to work out which one is better. Play every machine too many times and you can end up wasting all your money on the losing ones; settle on one that merely seems best after a few tries, and if it isn’t, you’ll end up losing all your money to it.
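This is the classic multi-armed bandit problem. As a minimal sketch (the payout probabilities and exploration rate below are made up for illustration), an epsilon-greedy player balances the two pressures by exploring at random a small fraction of the time and otherwise exploiting its best estimate so far:

```python
# Epsilon-greedy play on a row of slot machines (multi-armed bandit).
# The true payout probabilities are unknown to the player.
import random

true_probs = [0.2, 0.5, 0.35]          # hidden payout rate of each machine
counts = [0] * len(true_probs)         # plays per machine
values = [0.0] * len(true_probs)       # estimated payout rate per machine
epsilon = 0.1                          # fraction of plays spent exploring

for step in range(10_000):
    if random.random() < epsilon:
        arm = random.randrange(len(true_probs))                     # explore
    else:
        arm = max(range(len(true_probs)), key=lambda a: values[a])  # exploit
    reward = 1.0 if random.random() < true_probs[arm] else 0.0
    counts[arm] += 1
    # Incremental mean update of the estimated payout rate.
    values[arm] += (reward - values[arm]) / counts[arm]

print(values)  # estimates approach true_probs for well-explored machines
```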


The exploration vs exploitation dilemma in our lives

In Reinforcement Learning algorithms, the dilemma of exploration vs exploitation is continuously present. Without ‘exploring’ new and unknown options, learning does not occur. We keep the knowledge we have acquired and then act accordingly; that is, we ‘exploit’ what we have already learned and repeat a behaviour. Exploring new options is frequently risky: we may make decisions that lead us to worse situations than the current one. However, in the learning process, any ‘bad experience’ should be capitalized on as new knowledge for the purpose of improving future rewards. Sometimes, as human beings, we become frustrated by failure and slowly stop exploring the world. We lock ourselves in our ‘comfort zone’, where we are reinforced by conformist thoughts and repeat, again and again, unhelpful behaviours. However, staying in the comfort zone causes, for some people, an internal distress that, without a new exploration phase, may generate dissatisfaction and, in some cases, illness.


Using Bayesian Games to Address the Exploration-Exploitation Dilemma in Deep Learning Systems

Artificial intelligence (AI) agents often operate in environments with partial or incomplete information. In those settings, agents are forced to strike a balance between exploring the environment and taking actions that yield an immediate reward. The exploration-exploitation dilemma is one of the fundamental frictions in modern AI systems, particularly in heterogeneous, multi-agent environments. Recently, researchers from Microsoft published a paper proposing a method based on Bayesian incentives to find an adequate exploration-exploitation balance in multi-agent AI systems.


Singular Value Decomposition vs. Matrix Factoring in Recommender Systems

Recently, after watching the Recommender Systems class of Prof. Andrew Ng’s Machine Learning course, I found myself quite uncomfortable with not understanding how Matrix Factorization works. I know the math in Machine Learning is sometimes very obscure, and it’s often better to treat a model as a black box, but this one was too ‘magical’ by my standards. In such situations, I usually search Google for more references to better grasp the concept. This time I got even more confused. While Prof. Ng called the algorithm (Low Rank) Matrix Factorization, I found a different name on the internet: Singular Value Decomposition. What confused me the most was that Singular Value Decomposition looked very different from what Prof. Ng had taught, yet people kept suggesting they were the same thing. In this text, I will summarize my findings and try to clear up some of the confusion those terms can cause.
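The distinction can be seen in a few lines of NumPy (toy numbers, not from the article): classical SVD factors a fully observed matrix exactly, while the recommender-style factorization fits two low-rank factors to the ratings by gradient descent, an approach that also works when some entries are missing:

```python
# SVD vs. recommender-style matrix factorization on a toy rating matrix.
import numpy as np

R = np.array([[5., 4., 1.],
              [4., 5., 2.],
              [1., 2., 5.]])

# Singular Value Decomposition: R = U @ diag(s) @ Vt, exact for a full matrix.
U, s, Vt = np.linalg.svd(R)
rank2 = (U[:, :2] * s[:2]) @ Vt[:2, :]   # best rank-2 approximation of R

# Recommender-style factorization: gradient descent on R ~= P @ Q.T with
# L2 regularization; unlike SVD, this extends naturally to missing entries.
k, lr, reg = 2, 0.01, 0.1
rng = np.random.default_rng(0)
P = rng.random((3, k))
Q = rng.random((3, k))
for _ in range(2000):
    E = R - P @ Q.T                      # residuals
    P += lr * (E @ Q - reg * P)
    Q += lr * (E.T @ P - reg * Q)

print(np.round(rank2, 2))
print(np.round(P @ Q.T, 2))
```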


Towards Noise Robust Machine Learning

How we established state-of-the-art noise robustness using a simple loss function. In real life, data is often dirtier than a college kid’s kitchen pantry during finals week. Perturbations of all kinds (lighting differences, background speech, spelling errors, etc.) make our world a place full of confounding signals. As individuals in a world of noise, how do we cope? How did we evolve to become noise robust agents?
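The excerpt doesn’t say which loss the authors used, so as one hedged illustration of the genre, here is the generalized cross-entropy (Lq) loss of Zhang & Sabuncu (2018), which interpolates between standard cross-entropy (as q approaches 0) and the noise-tolerant mean absolute error (q = 1):

```python
# Illustrative noise-robust loss (not necessarily the one from the article):
# generalized cross-entropy, L_q = (1 - p_y^q) / q, where p_y is the
# predicted probability of the true class.
import torch
import torch.nn.functional as F

def generalized_cross_entropy(logits, targets, q=0.7):
    probs = F.softmax(logits, dim=1)
    p_true = probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # p(true class)
    return ((1.0 - p_true.clamp(min=1e-7) ** q) / q).mean()

logits = torch.randn(8, 10, requires_grad=True)
targets = torch.randint(0, 10, (8,))
loss = generalized_cross_entropy(logits, targets)
loss.backward()  # gradients flow as with any standard loss
```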


Understanding Unstructured Data With Language Models

Many of our machine learning capabilities come from structured data, but the real payload lies in the messy, unstructured data underneath. If we want to gain practical insights, machines have to learn to parse things like social media posts filled with misspellings or sarcasm, or handwritten doctor’s notes with illegible lettering. So how do machines do this? Alex Peattie, the co-founder of PEG, has thoughts on where we’ve been with language models in the past and how they may help machines decipher these difficulties.


The ML Test Score: A Rubric for ML Production Readiness and Technical Debt Reduction

Creating reliable, production-level machine learning systems brings on a host of concerns not found in small toy examples or even large offline research experiments. Testing and monitoring are key considerations for ensuring the production-readiness of an ML system and for reducing its technical debt. But it can be difficult to formulate specific tests, given that the actual prediction behavior of any given model is difficult to specify a priori. In this paper, we present 28 specific tests and monitoring needs, drawn from experience with a wide range of production ML systems, to help quantify these issues and provide an easy-to-follow road map for improving production readiness and paying down ML technical debt.
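For a flavor of what such a test might look like (a hypothetical example in the spirit of the rubric, not one of the paper’s 28 tests), a pre-deployment check can assert that a candidate model beats a trivial baseline on held-out data:

```python
# Hypothetical production-readiness test: the candidate model must beat a
# most-frequent-class baseline on held-out data before deployment.
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def test_model_beats_baseline():
    X, y = make_classification(n_samples=1000, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    baseline = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    assert accuracy_score(y_te, model.predict(X_te)) > \
           accuracy_score(y_te, baseline.predict(X_te))
```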