Scanning all newly published packages on PyPI, I know that the quality is often quite poor. I try to filter out the worst ones and list here those that might be worth a look, worth following, or that might inspire you in some way.

• **twit**

Tensor Weighted Interpolative Transfer. The purpose of TWIT is to allow transfer of values from a source tensor to a destination tensor with correct interpolation of values. The source and destination do not have to have the same number of dimensions, e.g. len(src.shape) != len(dst.shape). The ranges of indices also do not have to match in count. For example, given two one-dimensional tensors (vectors of values), one could say: copy from source range (2, 7) to destination range (0, 2) and use source-to-destination multipliers of (0.5, 0.9). This copies source values at indices 2, 3, 4, 5, 6, 7 to destination indices 0, 1, 2 (obviously 'scaling' the data), multiplying the source by 0.5 at the first destination index and interpolating the multiplier (weight) up to 0.9 for subsequent indices.
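The 1-D case described above can be sketched in a few lines of NumPy. This is an illustrative reimplementation of the idea (value interpolation plus a linearly interpolated weight), not TWIT's actual API; the function name and signature here are hypothetical:

```python
import numpy as np

def transfer_1d(src, src_range, dst, dst_range, weights):
    """Copy src over src_range into dst over dst_range, linearly
    interpolating both the values and the per-index weight.
    A sketch of the idea behind twit, not its actual API."""
    s0, s1 = src_range
    d0, d1 = dst_range
    w0, w1 = weights
    n = d1 - d0                      # number of destination steps
    for i in range(n + 1):
        t = i / n if n else 0.0      # position 0..1 along the destination span
        # fractional source position, then linear value interpolation
        pos = s0 + t * (s1 - s0)
        lo = int(np.floor(pos))
        hi = min(lo + 1, s1)
        frac = pos - lo
        value = (1 - frac) * src[lo] + frac * src[hi]
        # weight interpolated from w0 to w1 across the destination range
        w = w0 + t * (w1 - w0)
        dst[d0 + i] = w * value
    return dst

# the example from the description: range (2, 7) -> (0, 2), weights 0.5 -> 0.9
src = np.arange(10.0)
result = transfer_1d(src, (2, 7), np.zeros(3), (0, 2), (0.5, 0.9))
```

Here destination index 0 gets `0.5 * src[2]`, index 1 gets `0.7 * src[4.5]` (interpolated), and index 2 gets `0.9 * src[7]`.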

• **gobbli**

Uniform interface to deep learning approaches via Docker containers. gobbli is currently experimental. We have used gobbli for project work in its current state, but not all model weights/variations or complex edge cases (such as distributed experiments) have been thoroughly tested. We appreciate your patience and help improving the library!

• **gpam-ml-lib**

Machine learning library for Kaggle problems.

• **jupyda**

IPython magic extension to execute CUDA C code in Jupyter notebooks. Now you can easily use the power of Google Colab GPUs with your CUDA C code.

• **jupyter-enterprise-gateway**

A web server for spawning and communicating with remote Jupyter kernels. A lightweight, multi-tenant, scalable and secure gateway that enables Jupyter Notebooks to share resources across distributed clusters such as Apache Spark, Kubernetes and others.

• **jupyter-geppetto**

Geppetto extension for Jupyter notebook. Experimental Jupyter notebook extension. It extends the Tornado-based Jupyter Python server so that a client can establish a websocket connection and the server can serve static resources, but there is no real functionality beyond that.

• **netcal**

Python Framework to calibrate confidence estimates of classifiers like Neural Networks. This framework is designed to calibrate the confidence estimates of classifiers like Neural Networks. Modern Neural Networks are likely to be overconfident with their predictions. However, reliable confidence estimates of such classifiers are crucial especially in safety-critical applications.

For example: given 100 predictions with a confidence of 80% of each prediction, the observed accuracy should also match 80% (neither more nor less). This behaviour is achievable with several calibration methods.
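The "100 predictions at 80% confidence should be 80% accurate" idea is exactly what a reliability measure like expected calibration error (ECE) quantifies. A minimal NumPy sketch of ECE (a generic illustration, not netcal's API):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence and compare average confidence to
    observed accuracy in each bin; a perfectly calibrated classifier
    scores 0. Generic sketch, not netcal's actual API."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    n = len(confidences)
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()     # observed accuracy in the bin
            conf = confidences[mask].mean()  # average claimed confidence
            ece += mask.sum() / n * abs(acc - conf)
    return ece

# the worked example above: 100 predictions at 80% confidence, 80 correct
confs = np.full(100, 0.8)
correct = np.array([1] * 80 + [0] * 20)
ece = expected_calibration_error(confs, correct)
```

With 80 of 100 predictions correct at 80% confidence, the ECE is 0, matching the "neither more nor less" criterion in the text.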

• **NMR-peaks-picking**

nmrglue is a module for working with NMR data in Python. When used with the numpy, scipy, and matplotlib packages nmrglue provides a robust interpreted environment for processing, analyzing, and inspecting NMR data.

• **oatlib**

Observation Analysis Tool: library to handle time series

• **plsa**

Probabilistic Latent Semantic Analysis
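For readers unfamiliar with the technique: PLSA models each document as a mixture of latent topics, fit with expectation-maximization. A minimal NumPy sketch of the standard EM updates (an illustration of the algorithm, not this package's API):

```python
import numpy as np

def plsa(counts, n_topics, n_iter=50, seed=0):
    """Fit PLSA by EM on a (n_docs, n_words) term-count matrix.
    Returns P(z|d) and P(w|z). Illustrative sketch, not this package's API."""
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    # random initialization of P(z|d) and P(w|z), rows normalized
    p_z_d = rng.random((n_docs, n_topics))
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z = rng.random((n_topics, n_words))
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: P(z|d,w) proportional to P(z|d) * P(w|z)
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]      # (d, z, w)
        joint /= joint.sum(axis=1, keepdims=True) + 1e-12
        # M-step: re-estimate parameters from expected counts
        expected = counts[:, None, :] * joint              # (d, z, w)
        p_w_z = expected.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = expected.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_z_d, p_w_z

# tiny example: 2 documents, 3 words, 2 topics
counts = np.array([[2, 1, 0], [0, 1, 3]], dtype=float)
p_z_d, p_w_z = plsa(counts, n_topics=2, n_iter=10)
```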

• **similarity**

Python library for measuring string similarity
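As a point of comparison, the standard library already offers a basic similarity ratio via `difflib`; a dedicated package like this one typically adds more metrics (Levenshtein, Jaro-Winkler, cosine, etc.). A generic stdlib example, not this package's API:

```python
from difflib import SequenceMatcher

def string_similarity(a, b):
    """Ratio in [0, 1] based on longest matching blocks (stdlib difflib).
    Generic illustration, not this package's actual API."""
    return SequenceMatcher(None, a, b).ratio()

score = string_similarity("kitten", "sitting")
```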

• **simple-distributions**

A package for calculating probability density functions of common statistical distributions using numerical data and visualizing the results. The code herein creates probability density functions (PDFs) for Gaussian and binomial distributions based upon simple inputs. It also allows for modification of the PDFs based on external data file provided, as well as visualization of the data and PDF in question.
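The two distributions mentioned have simple closed-form densities; a minimal stdlib-only sketch of what such a package computes under the hood (the function names here are illustrative, not this package's API):

```python
import math

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    """Gaussian probability density at x. Generic sketch, not this package's API."""
    coeff = 1.0 / (sigma * math.sqrt(2 * math.pi))
    return coeff * math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def binomial_pmf(k, n, p):
    """Probability of exactly k successes in n Bernoulli(p) trials."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)
```

For example, the standard normal density peaks at `1 / sqrt(2 * pi) ≈ 0.3989` at `x = 0`, and a fair coin flipped twice shows exactly one head with probability 0.5.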

• **tf2-keras-pandas**

Easy and rapid deep learning – updated for TensorFlow 2.0. keras-pandas allows users to rapidly build and iterate on deep learning models. Getting data formatted and into Keras can be tedious and time-consuming, and can require domain expertise, whether you're a veteran or new to deep learning. `keras-pandas` overcomes these issues by (automatically) providing:

• Data transformations: A cleaned, transformed and correctly formatted `X` and `y` (good for keras, sklearn or any other ML platform)

• Data piping: Correctly formatted keras input, hidden and output layers to quickly start iterating on.
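To make the first bullet concrete, here is the kind of transformation such a library automates, hand-rolled with pandas and NumPy (this is a generic sketch of the tedium being removed, not keras-pandas' actual API; the column names are made up):

```python
import numpy as np
import pandas as pd

# a toy frame with a numeric feature, a categorical feature, and a target
df = pd.DataFrame({
    "age": [25, 32, None, 47],
    "city": ["NY", "SF", "NY", "LA"],
    "bought": [0, 1, 0, 1],
})

# numeric column: impute missing values, then standardize
age = df["age"].fillna(df["age"].mean())
age = (age - age.mean()) / age.std()

# categorical column: one-hot encode
city = pd.get_dummies(df["city"], prefix="city")

# the cleaned, correctly typed X and y ready for Keras/sklearn
X = np.column_stack([age.to_numpy(), city.to_numpy()]).astype("float32")
y = df["bought"].to_numpy().astype("float32")
```

Doing this by hand for every column type (and keeping it consistent between train and inference time) is exactly the boilerplate the package aims to eliminate.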

These approaches are built on best-in-class approaches from practitioners, Kaggle grandmasters, papers, blog posts, and coffee chats, providing a simple entry point into the world of deep learning and a strong foundation for deep learning experts.

• **tfloop**

tensorflow utils

• **yo-fluq-ds**

The toolkit for FluentPython, compatible with pandas, numpy and matplotlib

# What’s going on on PyPI

Posted Sunday, 08 Sep 2019, in Python Packages