In the second part of our tutorial on artificial neural networks, we cover three techniques that improve prediction accuracy: distortion, mini-batch gradient descent, and dropout.
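To give a flavor of one of these techniques, here is a minimal sketch of inverted dropout in NumPy. The array values and keep probability are illustrative, not taken from the tutorial: during training, each unit is zeroed with some probability, and the survivors are scaled up so the layer's expected output is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, keep_prob=0.8):
    """Inverted dropout: randomly zero units, then scale survivors
    by 1/keep_prob so the expected activation is unchanged."""
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

a = np.ones((4, 5))            # one layer's activations (toy values)
d = dropout(a, keep_prob=0.8)  # dropped units are 0, survivors are 1.25
```

At test time, dropout is simply switched off; the inverted scaling above is what makes that possible without rescaling weights.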
Do you know what gives red and white wine their colors? Use k-nearest neighbors (k-NN) to discover the chemical make-up that defines typical wines, as well as to detect atypical ones.
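The idea behind k-NN can be sketched in a few lines: classify a new sample by majority vote among its k closest training samples. The chemical readings and labels below are toy values invented for illustration, not real wine data:

```python
import numpy as np

# Toy training data (hypothetical readings, not real wine measurements):
# columns = [volatile acidity, total sulfur dioxide]; label 0 = red, 1 = white
X_train = np.array([[0.70,  30.], [0.60,  25.], [0.65,  40.],
                    [0.30, 120.], [0.25, 150.], [0.35, 130.]])
y_train = np.array([0, 0, 0, 1, 1, 1])

def knn_predict(x, k=3):
    """Label a sample by majority vote among its k nearest neighbors."""
    dists = np.linalg.norm(X_train - x, axis=1)   # distance to every training point
    nearest = y_train[np.argsort(dists)[:k]]      # labels of the k closest
    return np.bincount(nearest).argmax()          # majority vote

print(knn_predict(np.array([0.28, 140.])))  # close to the white cluster → 1
```

In practice the features would be standardized first, since k-NN is sensitive to the scale of each variable.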
Learn how random forests, an ensemble of decision trees, can help predict where and when a crime will happen in San Francisco, California.
Decision trees can be used to identify customer profiles or to predict who will resign. Using the Titanic dataset, learn about their advantages and pitfalls, as well as better alternatives.
You are exploring the nutritional content of food. How can food items be differentiated? How might they be classified? PCA derives underlying variables that help you slice your data for these insights.
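At its core, PCA centers the data and finds the directions of greatest variance. A minimal sketch via singular value decomposition, using a made-up nutrition table (the foods and numbers are hypothetical, chosen only to show the mechanics):

```python
import numpy as np

# Hypothetical nutrition table: rows = foods, columns = [fat, protein, carbs, sugar]
X = np.array([[30.,  5., 40., 35.],
              [ 2., 25., 10.,  1.],
              [28.,  6., 45., 30.],
              [ 3., 22., 12.,  2.],
              [15., 15., 25., 14.]])

# PCA: center the data, then take the top right-singular vectors as components
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

scores = Xc @ Vt[:2].T            # each food projected onto the 2 main components
explained = S**2 / np.sum(S**2)   # fraction of variance captured by each component
```

The `scores` matrix is the low-dimensional view you would plot: foods that are similar in their underlying make-up land near each other, which is what makes PCA useful for slicing the data.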
Modern smartphone apps allow you to recognize handwriting and convert them into typed words. We look at how we can train our own neural network algorithm to do this.
While an artificial neural network could learn to recognize a cat on the left, it would not recognize the same cat if it appeared on the right. To solve this problem, we introduce convolutional neural networks.
You want to publish ads for your product. You have two promising ad designs but a limited budget. How can you find out which ad is more effective, while maximizing the impact of every ad you publish?
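One way to balance "learn which ad is better" against "don't waste budget on the worse ad" is a multi-armed bandit. Here is a sketch of the epsilon-greedy strategy; the click-through rates are invented for simulation and would be unknown in a real campaign:

```python
import random

random.seed(42)

# Hypothetical true click-through rates (unknown in practice, used only to simulate)
TRUE_CTR = {"A": 0.04, "B": 0.06}

counts = {"A": 0, "B": 0}  # times each ad was shown
clicks = {"A": 0, "B": 0}  # clicks each ad received

def choose_ad(epsilon=0.1):
    """Epsilon-greedy: mostly show the best ad so far,
    but keep exploring with probability epsilon."""
    if random.random() < epsilon or counts["A"] == 0 or counts["B"] == 0:
        return random.choice(["A", "B"])
    return max(counts, key=lambda ad: clicks[ad] / counts[ad])

for _ in range(5000):
    ad = choose_ad()
    counts[ad] += 1
    if random.random() < TRUE_CTR[ad]:   # simulate whether the viewer clicks
        clicks[ad] += 1
```

Unlike a fixed 50/50 A/B split, the budget gradually shifts toward whichever ad is performing better, while the occasional exploratory impression keeps the estimate of the other ad honest.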
You have customers. But how should you categorize them to target sales? How many such categories exist? To answer these questions, we can use cluster analysis.
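A common clustering technique is k-means, sketched below on simulated customer data (the spend and visit figures are made up to form two obvious groups): assign each point to its nearest centroid, recompute the centroids, and repeat.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated customers: columns = [annual spend, visits per month], two groups
customers = np.vstack([rng.normal([200., 2.], [30., 0.5], (20, 2)),
                       rng.normal([900., 8.], [80., 1.0], (20, 2))])

def kmeans(X, k=2, iters=20):
    """Plain k-means: assign points to the nearest centroid, recompute centroids."""
    centroids = X[rng.choice(len(X), k, replace=False)]  # init from the data
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centroids, axis=2)
        labels = dists.argmin(axis=1)
        # recompute each centroid; keep the old one if its cluster went empty
        centroids = np.array([X[labels == j].mean(axis=0)
                              if np.any(labels == j) else centroids[j]
                              for j in range(k)])
    return labels, centroids

labels, centroids = kmeans(customers)
```

Choosing k itself (how many categories exist) is usually done by rerunning with different k and comparing how tight the resulting clusters are.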
Outliers can be detected by the same algorithms used for prediction. To illustrate, we use the k-nearest neighbors (k-NN) algorithm.
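A simple k-NN outlier score: a point that sits far from even its nearest neighbors is suspicious. The data below is a made-up toy set with one deliberately isolated point:

```python
import numpy as np

# Toy data: a tight cluster plus one obvious outlier at the end
points = np.array([[1.00, 1.10], [0.90, 1.00], [1.10, 0.90],
                   [1.00, 0.90], [0.95, 1.05], [8.00, 8.00]])

def knn_outlier_scores(X, k=2):
    """Score each point by its mean distance to its k nearest neighbors;
    large scores flag points far from everything else."""
    dists = np.linalg.norm(X[:, None] - X[None, :], axis=2)  # pairwise distances
    np.fill_diagonal(dists, np.inf)                          # ignore distance to self
    nearest = np.sort(dists, axis=1)[:, :k]                  # k smallest per row
    return nearest.mean(axis=1)

scores = knn_outlier_scores(points)
print(scores.argmax())  # → 5, the isolated point
```

Points are then flagged as outliers when their score exceeds a chosen threshold, rather than by a hard yes/no rule.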
Latent Dirichlet allocation (LDA) is a technique that automatically discovers the topics a set of documents contains, letting you analyze large volumes of text efficiently. To find out how it works, check out this tutorial.