What Is An Enterprise Knowledge Graph and Why Do I Want One?

Enterprise Knowledge Graphs have been on the rise. We see them as an incredibly valuable tool for relating your structured and unstructured information and discovering facts about your organization. Yet knowledge graphs remain far too underutilized: organizations are still struggling to find and, more importantly, discover their valuable content. To take it a step further, knowledge graphs are a prerequisite for smart, semantic artificial intelligence (AI) applications that can surface facts from your content, data, and organizational knowledge that would otherwise go unnoticed. A smart semantic AI application, whether it is a chatbot, a cognitive search built on Natural Language Processing (NLP), or a recommendation engine, leverages your enterprise knowledge graph to extract, relate, and deliver answers, recommendations, and insights. Around semantic technologies, several terms get thrown around, such as ontology, triple store, semantic data model, graph database, and knowledge graph, and that is before we even get into standards like SKOS, RDF, and OWL. While it is easy to get lost in the details, for the purposes of this blog I will focus on a high-level overview of the components that make up an enterprise knowledge graph.
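To make the idea of a knowledge graph concrete, here is a minimal sketch of a few RDF triples using the rdflib Python library. The namespace, entities, and properties (example.org, Alice, worksOn) are made up purely for illustration and are not from the article:

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")  # hypothetical namespace for illustration

g = Graph()
# Each fact is a (subject, predicate, object) triple in the graph.
g.add((EX.Alice, RDF.type, EX.Employee))
g.add((EX.Alice, EX.worksOn, EX.KnowledgeGraphProject))
g.add((EX.KnowledgeGraphProject, EX.usesStandard, Literal("RDF")))

# Ask the graph: what does Alice work on?
for _, _, obj in g.triples((EX.Alice, EX.worksOn, None)):
    print(obj)
```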


The State of Machine Learning Adoption in the Enterprise

While the use of machine learning (ML) in production started near the turn of the century, it’s taken roughly 20 years for the practice to become mainstream throughout industry. With this report, you’ll learn how more than 11,000 data specialists responded to a recent O’Reilly survey about their organization’s approach – or intended approach – to machine learning. Data scientists, machine learning engineers, and deep learning engineers throughout the world answered detailed questions about their organization’s level of ML adoption. About half of the respondents work for enterprises in the early stages of exploring ML, while the rest have moderate or extensive experience deploying ML models to production.


When ‘Zoë’ !== ‘Zoë’. Or why you need to normalize Unicode strings

Never heard of Unicode normalization? You’re not alone. But it will save you a lot of trouble.
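A quick sketch of the problem in Python (not code from the article): the same visible string can be stored as a precomposed character or as a base letter plus a combining mark, and they only compare equal after normalization:

```python
import unicodedata

composed = "Zo\u00eb"     # 'Zoë' with a single precomposed 'ë'
decomposed = "Zoe\u0308"  # 'Zoë' as 'e' + combining diaeresis

print(composed == decomposed)                    # False: different code points
print(unicodedata.normalize("NFC", composed) ==
      unicodedata.normalize("NFC", decomposed))  # True after NFC normalization
```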


Iodide: an experimental tool for scientific communication and exploration on the web

Iodide lets you do data science entirely in your browser. Create, share, collaborate, and reproduce powerful reports and visualizations with tools you already know. In the last 10 years, there has been an explosion of interest in ‘scientific computing’ and ‘data science’: that is, the application of computation to answer questions and analyze data in the natural and social sciences. To address these needs, we’ve seen a renaissance in programming languages, tools, and techniques that help scientists and researchers explore and understand data and scientific concepts, and to communicate their findings. But to date, very few tools have focused on helping scientists gain unfiltered access to the full communication potential of modern web browsers. So today we’re excited to introduce Iodide, an experimental tool meant to help scientists write beautiful interactive documents using web technologies, all within an iterative workflow that will be familiar to many scientists.


Jupyter Lab: Evolution of the Jupyter Notebook

An overview of JupyterLab, the next generation of the Jupyter Notebook. Public data suggests there are more than three million Jupyter Notebooks available on GitHub, with roughly a similar number of private ones. Even without this data, we are quite aware of the popularity of notebooks in the data science domain. The ability to write code, inspect results, and produce rich outputs is among the features that made Jupyter Notebooks so popular. But, as they say, all good things must come to an end, and so will our favourite notebook: JupyterLab will eventually replace the classic Jupyter Notebook, and for the better.


Radical Change Is Coming To Data Science Jobs

Within 10 years, data science will be so enmeshed within industry-specific applications and broad productivity tools that we may no longer think of it as a hot career. Just as generations of math and statistics students have gone on to fill all manner of roles in business and academia without thinking of themselves as mathematicians or statisticians, the newly minted data science grads will be tomorrow’s manufacturing engineers, marketing leaders, and medical researchers.


12 things I wish I’d known before starting as a Data Scientist

1. ‘Data science’ is a vague term, so treat it accordingly
2. Imposter syndrome is a normal part of the job
3. You’ll never have to know all the tools
4. However, learn your basic tools well
5. You’re an expert in a domain, not just methods
6. The most important skill is critical thinking
7. Take relevant classes – not just technical classes
8. Practice communication – written, visual, and verbal
9. Work on real data problems
10. Publish your work and get feedback however you can
11. Go to events – hackathons, conferences, meetups
12. Be flexible with how you enter the field


Let’s build an Article Recommender using LDA

Out of a keen interest in learning new topics, I decided to work on a project where a Latent Dirichlet Allocation (LDA) model recommends Wikipedia articles based on a search phrase. This article explains my approach to building the project in Python. Check out the project on GitHub below.
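As a rough illustration of the general approach (not the author's project code), here is a minimal sketch using gensim: fit an LDA model on a toy corpus, then rank documents by the similarity of their topic distributions to a query phrase. The toy documents and topic count are assumptions for the example:

```python
from gensim import corpora, models, similarities

# Toy "articles" as token lists (stand-ins for preprocessed Wikipedia text).
docs = [
    "machine learning model training data".split(),
    "graph network robot control planning".split(),
    "word vector language text representation".split(),
]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2, passes=10)
index = similarities.MatrixSimilarity(lda[corpus], num_features=lda.num_topics)

# Recommend articles for a search phrase by comparing topic distributions.
query = dictionary.doc2bow("learning from text data".split())
print(list(enumerate(index[lda[query]])))  # (article id, similarity) pairs
```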


Light on Math ML: Attention with Keras

In this article, you will first grok what a sequence-to-sequence model is and why attention is important for sequential models. Next, you will learn the nitty-gritty of the attention mechanism. The post ends by explaining how to use the attention layer.
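The core idea behind attention can be shown in a few lines. The sketch below implements plain dot-product attention in numpy (not the article's Keras layer): score each encoder state against a query, softmax the scores into weights, and return the weighted sum as the context vector.

```python
import numpy as np

def dot_product_attention(query, keys, values):
    """query: (d,), keys/values: (T, d) -> context vector of shape (d,)."""
    scores = keys @ query                    # one score per time step, shape (T,)
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ values                  # weighted sum of the value vectors

T, d = 5, 8
rng = np.random.default_rng(0)
context = dot_product_attention(rng.normal(size=d),
                                rng.normal(size=(T, d)),
                                rng.normal(size=(T, d)))
print(context.shape)  # (8,)
```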


Robotic Control with Graph Networks

Machine learning is helping to transform many fields across diverse industries, as anyone interested in technology undoubtedly knows. Computer vision and natural language processing have changed dramatically thanks to deep learning algorithms in the past few years, and the effects of that change are seeping into our daily lives. One of the fields where artificial intelligence is expected to make drastic changes is robotics. Decades ago, science fiction writers envisioned robots powered by artificial intelligence interacting with human society, either helping solve humanity’s problems or trying to destroy humankind. Our reality is far from that, and we understand today that creating intelligent robots is a harder challenge than was expected back then. Robots must sense the world and understand their environment, reason about their goals and how to achieve them, and execute their plans with whatever actuators they have.


Skip-Gram: NLP context words prediction algorithm

NLP is a field of artificial intelligence in which we process human language, as text or speech, so that computers can work with it the way humans do. Humans produce a large amount of data written in a very unorganized format, so it is difficult for a machine to extract meaning from raw text. To make a machine learn from raw text, we need to transform the data into a vector format that can easily be processed by our computers. This transformation of raw text into a vector format is known as word representation.
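Skip-gram trains a model to predict the context words surrounding each center word. A minimal sketch (illustrative only, with a made-up helper name and window size) of how the (center, context) training pairs are generated:

```python
def skipgram_pairs(tokens, window=2):
    """Generate (center, context) pairs as used by the skip-gram objective."""
    pairs = []
    for i, center in enumerate(tokens):
        # Every word within `window` positions of the center is a context word.
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

print(skipgram_pairs("the quick brown fox jumps".split()))
```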


PCA and SVD explained with numpy

How exactly principal component analysis and singular value decomposition are related, and how to implement both using numpy.
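The relationship is easy to verify numerically. The sketch below (my own example data, not the article's) computes the principal components once via the eigendecomposition of the covariance matrix and once via the SVD of the centered data matrix, and checks that the squared singular values divided by n - 1 match the covariance eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
Xc = X - X.mean(axis=0)                      # center the data

# PCA via eigendecomposition of the covariance matrix
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)

# PCA via SVD of the centered data matrix
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Squared singular values / (n - 1) equal the covariance eigenvalues.
print(np.allclose(np.sort(S**2 / (len(Xc) - 1)), np.sort(eigvals)))  # True
```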


Hyper-parameter Tuning Techniques in Deep Learning

The process of setting the hyper-parameters requires expertise and extensive trial and error. There are no simple and easy ways to set hyper-parameters, specifically the learning rate, batch size, momentum, and weight decay.
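As a generic illustration of the trial-and-error loop (random search, which is one common technique and not necessarily the one the article focuses on), here is a minimal sketch; the search space values and the train_and_evaluate function are placeholders I made up:

```python
import random

def train_and_evaluate(lr, batch_size, momentum, weight_decay):
    """Placeholder: in practice, train the model and return a validation score."""
    return -abs(lr - 1e-3) - abs(momentum - 0.9)  # dummy score for illustration

search_space = {
    "lr": [1e-4, 3e-4, 1e-3, 3e-3],
    "batch_size": [32, 64, 128],
    "momentum": [0.8, 0.9, 0.99],
    "weight_decay": [0.0, 1e-5, 1e-4],
}

best_score, best_params = float("-inf"), None
for _ in range(20):                                # 20 random trials
    params = {k: random.choice(v) for k, v in search_space.items()}
    score = train_and_evaluate(**params)
    if score > best_score:
        best_score, best_params = score, params

print(best_params, best_score)
```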


How to create professional reports from R scripts, with custom styles.

In the practical tips for R Markdown post, we talked briefly about how we can easily create professional reports directly from R scripts, without the need to convert them manually to Rmd and create code chunks. In this one, we provide useful tips on advanced options for styling, using themes, and producing lightweight HTML reports directly from R scripts. We also provide a repository with an example R script and rendering code to easily get differently styled and sized outputs.


Developing a DCGAN Model in Tensorflow 2.0

In early March 2019, TensorFlow 2.0 was released and we decided to create an image generator based on Taehoon Kim’s implementation of DCGAN. Here’s a tutorial on how to develop a DCGAN model in TensorFlow 2.0. ‘To avoid the fast convergence of D (discriminator) network, G (generator) network is updated twice for each D network update, which differs from original paper.’ – Taehoon Kim
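For orientation, here is a minimal sketch of a DCGAN generator in TensorFlow 2.0 using tf.keras. The layer sizes, latent dimension, and 64x64 output resolution are my own assumptions for illustration, not Taehoon Kim’s exact architecture or the tutorial’s code:

```python
import tensorflow as tf
from tensorflow.keras import layers

def make_generator(latent_dim=100):
    """Upsample a latent vector to a 64x64 RGB image with transposed convolutions."""
    return tf.keras.Sequential([
        layers.Dense(4 * 4 * 256, use_bias=False, input_shape=(latent_dim,)),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Reshape((4, 4, 256)),
        layers.Conv2DTranspose(128, 5, strides=2, padding="same", use_bias=False),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Conv2DTranspose(64, 5, strides=2, padding="same", use_bias=False),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Conv2DTranspose(32, 5, strides=2, padding="same", use_bias=False),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="tanh"),
    ])

generator = make_generator()
print(generator(tf.random.normal([1, 100])).shape)  # (1, 64, 64, 3)
```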