Profit-Maximizing A/B Test
Marketers often use A/B testing as a tactical tool to compare marketing treatments in a test stage and then deploy the better-performing treatment to the remainder of the consumer population. While these tests have traditionally been analyzed using hypothesis testing, we re-frame such tactical tests as an explicit trade-off between the opportunity cost of the test (where some customers receive a sub-optimal treatment) and the potential losses associated with deploying a sub-optimal treatment to the remainder of the population. We derive a closed-form expression for the profit-maximizing test size and show that it is substantially smaller than that typically recommended for a hypothesis test, particularly when the response is noisy or when the total population is small. The common practice of using small holdout groups can be rationalized by asymmetric priors. The proposed test design achieves nearly the same expected regret as the flexible, yet harder-to-implement multi-armed bandit. We demonstrate the benefits of the method in three different marketing contexts — website design, display advertising and catalog tests — in which we estimate priors from past data. In all three cases, the optimal sample sizes are substantially smaller than for a traditional hypothesis test, resulting in higher profit. …
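The trade-off can also be evaluated numerically rather than in closed form. The sketch below (not the authors' derivation) simulates testing two treatments on n customers each under a common normal prior, deploying the apparent winner to the remaining N − 2n, and picking the n that maximizes expected profit; the population size, noise level, and prior parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100_000         # total population (hypothetical)
sigma = 1.0         # per-customer response noise (hypothetical)
mu, s = 0.10, 0.05  # prior mean and std dev of each treatment's true mean (hypothetical)

def expected_profit(n, sims=20_000):
    """Expected total profit when n customers are tested on each arm
    and the apparent winner is rolled out to the remaining N - 2n."""
    m = rng.normal(mu, s, size=(sims, 2))                     # true means drawn from the prior
    ybar = m + rng.normal(0, sigma / np.sqrt(n), (sims, 2))   # observed test-stage means
    winner = ybar.argmax(axis=1)
    test = m.sum(axis=1) * n                                  # profit earned during the test
    roll = m[np.arange(sims), winner] * (N - 2 * n)           # profit from the roll-out stage
    return (test + roll).mean()

sizes = np.arange(100, 10_001, 100)
profits = [expected_profit(n) for n in sizes]
print("profit-maximizing test size per arm ≈", sizes[int(np.argmax(profits))])
```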

Principle of Minimum Differentiation
Hotelling’s law is an observation in economics that in many markets it is rational for producers to make their products as similar as possible. This is also referred to as the principle of minimum differentiation, as well as Hotelling’s linear city model. The observation was made by Harold Hotelling (1895-1973) in the article ‘Stability in Competition’, published in the Economic Journal in 1929. The opposing phenomenon is product differentiation, which is usually considered to be a business advantage if executed properly. …
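A minimal sketch of the linear city intuition (with hypothetical starting locations and prices held fixed): consumers are spread uniformly along a line, each buys from the nearest of two firms, and when the firms repeatedly relocate to maximize their market share they converge to the same spot at the center, i.e. minimum differentiation.

```python
import numpy as np

grid = np.linspace(0, 1, 101)        # candidate firm locations on the linear city
consumers = np.linspace(0, 1, 1001)  # consumers spread uniformly on [0, 1]

def share(a, b):
    """Fraction of consumers closer to a firm at `a` than to a rival at `b`."""
    da, db = np.abs(consumers - a), np.abs(consumers - b)
    return (da < db).mean() + 0.5 * (da == db).mean()  # ties split evenly

loc = np.array([0.1, 0.9])           # arbitrary starting locations
for _ in range(50):                  # iterated best responses in location
    for i in (0, 1):
        rival = loc[1 - i]
        loc[i] = grid[np.argmax([share(x, rival) for x in grid])]

print(loc)  # both firms end up at the center: minimum differentiation
```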

Multitask Learning Encoder (MTLE)
Learning visual feature representations for video analysis is a daunting task that requires a large amount of training samples and a proper generalization framework. Many of the current state-of-the-art methods for video captioning and movie description rely on simple encoding mechanisms through recurrent neural networks to encode temporal visual information extracted from video data. In this paper, we introduce a novel multitask encoder-decoder framework for automatic semantic description and captioning of video sequences. In contrast to current approaches, our method relies on distinct decoders that train a visual encoder in a multitask fashion. Our system does not depend solely on multiple labels and tolerates a lack of training data, working even with datasets where only a single annotation is available per video. Our method shows improved performance over current state-of-the-art methods on several metrics on multi-caption and single-caption datasets. To the best of our knowledge, ours is the first method to use a multitask approach for encoding video features. Our method demonstrated its robustness in the Large Scale Movie Description Challenge (LSMDC) 2017, where it won the movie description task and its results were ranked, among the competitors, as the most helpful for the visually impaired. …
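A minimal sketch of the general idea (not the authors' exact architecture): one visual encoder is shared by several task-specific decoder heads, and the task losses are summed so that every task's gradients update the shared encoder. All layer sizes, task heads, and data shapes below are hypothetical.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Shared video encoder: a GRU over per-frame features (dimensions hypothetical)."""
    def __init__(self, feat_dim=2048, hidden=512):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)

    def forward(self, frames):            # frames: (batch, time, feat_dim)
        _, h = self.rnn(frames)
        return h[-1]                      # (batch, hidden) video representation

encoder = SharedEncoder()
decoders = nn.ModuleDict({
    "caption": nn.Linear(512, 10_000),    # toy caption-word head (hypothetical)
    "attrs": nn.Linear(512, 100),         # toy attribute/concept head (hypothetical)
})
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoders.parameters()), lr=1e-4)

frames = torch.randn(4, 16, 2048)         # dummy batch: 4 clips, 16 frames each
targets = {"caption": torch.randint(0, 10_000, (4,)),
           "attrs": torch.randint(0, 2, (4, 100)).float()}

z = encoder(frames)
loss = (nn.functional.cross_entropy(decoders["caption"](z), targets["caption"])
        + nn.functional.binary_cross_entropy_with_logits(decoders["attrs"](z), targets["attrs"]))
loss.backward()                           # both tasks' gradients flow into the shared encoder
opt.step()
```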

Inspection Paradox
Suppose you are told to inspect a collection of lightbulbs: 90% of them are faulty and burn out immediately, while 10% of them have a lifetime of one month. If you arrive at a random time t1 and observe the bulb currently burning until it burns out (and then leave and come back at some other random time t2 > t1), then you will almost certainly only ever be checking the good ones (because all the faulty ones burn out immediately!), yet these represent only 10% of the total group! …
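A quick simulation (with hypothetical numbers standing in for "immediately") makes the bias concrete: bulbs are replaced as soon as they burn out, and an inspector who arrives at a uniformly random time almost always lands inside the burning interval of a long-lived bulb, even though good bulbs are only 10% of the population.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers: faulty bulbs (90%) last ~1 hour ("immediately"),
# good bulbs (10%) last one month (720 hours); bulbs are replaced as they burn out.
n_bulbs = 1_000_000
good = rng.random(n_bulbs) < 0.10
lifetime = np.where(good, 720.0, 1.0)

burnout = lifetime.cumsum()                            # time at which each successive bulb dies
inspect_times = rng.uniform(0, burnout[-1], size=100_000)
inspected = np.searchsorted(burnout, inspect_times)    # which bulb is burning at each visit

print("share of bulbs that are good:      ", good.mean())             # ~0.10
print("share of inspections finding good: ", good[inspected].mean())  # ~0.99
```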