
    Things to Know about Machine Learning

    Amit Shukla

    Why “Learn”?

    Machine learning is programming computers to optimize a performance criterion using example data or past experience.
There is no need to “learn” to calculate payroll.

    Learning is used when:

=>Human expertise does not exist (navigating on Mars)
=>Humans are unable to explain their expertise (speech recognition)
=>The solution changes over time (routing on a computer network)
=>The solution needs to be adapted to particular cases (user biometrics)

What We Talk About When We Talk About “Learning”

    =>Learning general models from data of particular examples
    =>Data is cheap and abundant (data warehouses, data marts); knowledge is expensive and scarce.
=>Example in retail: from customer transactions to consumer behavior (see the sketch after this list):
=>People who bought “Da Vinci Code” also bought “The Five People You Meet in Heaven”
    =>Build a model that is a good and useful approximation to the data.
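
To make the retail example concrete, here is a minimal sketch in Python of the kind of co-occurrence counting behind such “people who bought X also bought Y” patterns. The transactions are invented purely for illustration:

```python
from collections import Counter
from itertools import combinations

# Hypothetical customer transactions; the titles are just illustrative labels.
transactions = [
    {"Da Vinci Code", "The Five People You Meet in Heaven"},
    {"Da Vinci Code", "The Five People You Meet in Heaven", "Angels & Demons"},
    {"Da Vinci Code", "Angels & Demons"},
    {"The Five People You Meet in Heaven"},
]

# Count how often each pair of items is bought together.
pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# Estimate P(B | A): of the baskets containing A, how many also contain B?
item_counts = Counter(item for basket in transactions for item in basket)
a, b = "Da Vinci Code", "The Five People You Meet in Heaven"
confidence = pair_counts[tuple(sorted((a, b)))] / item_counts[a]
print(f"P(bought {b!r} | bought {a!r}) = {confidence:.2f}")  # 2 of 3 baskets -> 0.67
```

Real market basket analysis works the same way at scale: count co-occurrences over many transactions, then keep the rules whose support and confidence are high enough to act on.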

    Data Mining/KDD

Definition: “KDD is the non-trivial process of identifying valid, novel, potentially useful, and ultimately understandable patterns in data.”

    Applications:

    =>Retail: Market basket analysis, Customer relationship management (CRM)
    =>Finance: Credit scoring, fraud detection
    =>Manufacturing: Optimization, troubleshooting
    =>Medicine: Medical diagnosis
    =>Telecommunications: Quality of service optimization
    =>Bioinformatics: Motifs, alignment
    =>Web mining: Search engines

    What is Machine Learning?

Machine learning is the study of algorithms that improve their performance at some task with experience.

    Optimize a performance criterion using example data or past experience.

    Role of Statistics: Inference from a sample

Role of Computer Science: Efficient algorithms to
=>Solve the optimization problem
=>Represent and evaluate the model for inference

    Growth of Machine Learning

    Machine learning is the preferred approach to
    =>Speech recognition, Natural language processing
    =>Computer vision
    =>Medical outcomes analysis
    =>Robot control
    =>Computational biology

    This trend is accelerating
    =>Improved machine learning algorithms
    =>Improved data capture, networking, faster computers
    =>Software too complex to write by hand
    =>New sensors / IO devices
    =>Demand for self-customization to the user, environment
=>It turns out to be difficult to extract knowledge from human experts (witness the failure of expert systems in the 1980s)

Types of machine learning:

    Association Analysis
    Supervised Learning
    =>Classification
    =>Regression/Prediction
    Unsupervised Learning
    Reinforcement Learning

    Classification: Applications

=>Also known as pattern recognition
    =>Face recognition: Pose, lighting, occlusion (glasses, beard), make-up, hairstyle
    =>Character recognition: Different handwriting styles.
    =>Speech recognition: Temporal dependency.
    =>Use of a dictionary or the syntax of the language.
=>Sensor fusion: Combine multiple modalities; e.g., visual (lip image) and acoustic for speech
    =>Medical diagnosis: From symptoms to illnesses
    =>Web Advertising: Predict if a user clicks on an ad on the Internet.

    Supervised Learning: Uses

Example: decision tree tools that create rules (see the sketch after this list)

    =>Prediction of future cases: Use the rule to predict the output for future inputs
    =>Knowledge extraction: The rule is easy to understand
    =>Compression: The rule is simpler than the data it explains
    =>Outlier detection: Exceptions that are not covered by the rule, e.g., fraud
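
As a concrete illustration of these uses, here is a minimal sketch assuming scikit-learn is available. The tiny “loan approval” dataset is invented purely for illustration:

```python
# Requires scikit-learn. The toy "loan approval" dataset is invented for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features: [income in $k, years employed]; label: 1 = approve, 0 = reject.
X = [[20, 1], [35, 2], [50, 5], [80, 10], [25, 0], [90, 8]]
y = [0, 0, 1, 1, 0, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Knowledge extraction: the learned rule is easy to read.
print(export_text(tree, feature_names=["income", "years_employed"]))

# Prediction of future cases: apply the rule to a new applicant.
print(tree.predict([[60, 4]]))  # e.g. [1]
```

The printed tree is the “rule”: it is simpler than the data it explains (compression), it can be read and checked by a person (knowledge extraction), and applicants it misclassifies are candidates for closer inspection (outlier detection).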

    Unsupervised Learning

    =>Learning “what normally happens”
    =>No output
    =>Clustering: Grouping similar instances
    =>Other applications: Summarization, Association Analysis
    =>Example applications
=>Customer segmentation in CRM (see the clustering sketch after this list)
=>Image compression: Color quantization
    =>Bioinformatics: Learning motifs
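
Below is a minimal clustering sketch, assuming scikit-learn and NumPy are available; the customer data is invented purely for illustration:

```python
# Requires scikit-learn and NumPy. Toy customer data invented for illustration:
# each row is [annual spend in $k, visits per month].
import numpy as np
from sklearn.cluster import KMeans

customers = np.array([
    [2, 1], [3, 1], [2, 2],     # low-spend, infrequent customers
    [20, 8], [22, 9], [19, 7],  # high-spend, frequent customers
])

# Clustering: group similar instances without any labels.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)           # e.g. [0 0 0 1 1 1]: two customer segments
print(kmeans.cluster_centers_)  # the "typical" customer in each segment
```

Note that no output labels are given; the algorithm only groups similar rows together, which is exactly “learning what normally happens.”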

    Reinforcement Learning

    Topics:
    =>Policies: what actions should an agent take in a particular situation
    =>Utility estimation: how good is a state (used by policy)
    No supervised output but delayed reward
    Credit assignment problem (what was responsible for the outcome)
    Applications:
    =>Game playing
=>Robot in a maze (see the Q-learning sketch after this list)
=>Multiple agents, partial observability, …
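
The sketch below is a minimal tabular Q-learning example on a made-up five-cell corridor (not any particular library’s API). The reward arrives only at the goal, and the update rule propagates it backwards through the utility estimates, which is one way of handling the credit assignment problem:

```python
# A minimal tabular Q-learning sketch on a made-up five-cell corridor:
# states 0..4, actions 0 = left, 1 = right, reward only on reaching state 4.
import random

n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]  # utility estimates per (state, action)
alpha, gamma, epsilon = 0.5, 0.9, 0.3             # learning rate, discount, exploration

def step(state, action):
    nxt = max(0, min(n_states - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == n_states - 1 else 0.0  # delayed reward: only at the goal
    return nxt, reward

for episode in range(300):
    state = random.randrange(n_states - 1)        # start in a random non-goal state
    while state != n_states - 1:
        # Policy: epsilon-greedy over the current utility estimates.
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])
        nxt, reward = step(state, action)
        # The update propagates the delayed reward backwards (credit assignment).
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt

# The learned policy: typically 1 ("go right") in every non-goal state.
print([max(range(n_actions), key=lambda a: Q[s][a]) for s in range(n_states - 1)])
```

There is never a supervised “correct action” signal here; the agent only sees the delayed reward, yet the learned policy heads for the goal.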

    Machine learning algorithms can figure out how to perform important tasks by generalizing from examples. This is often feasible and cost-effective where manual programming is not. As more data becomes available, more ambitious problems can be tackled. As a result, machine learning is widely used in computer science and other fields. However, developing successful machine learning applications requires a substantial amount of “black art” that is hard to find in textbooks.

    INTRODUCTION

    Machine learning systems automatically learn programs from data. This is often a very attractive alternative to manually constructing them, and in the last decade, the use of machine learning has spread rapidly throughout computer science and beyond.

Machine learning is used in Web search, spam filters, recommender systems, ad placement, credit scoring, fraud detection, stock trading, drug design, and many other applications. A recent report from a well-known research institute asserts that machine learning (a.k.a. data mining or predictive analytics) will be the driver of the next big wave of innovation.

Several fine textbooks are available to interested practitioners and researchers. However, much of the “folk knowledge” needed to successfully develop machine learning applications is not readily available in them. As a result, many machine learning projects take much longer than necessary or wind up producing less-than-ideal results. Yet much of this folk knowledge is fairly easy to communicate, and that is the purpose of this article. Many different types of machine learning exist, but for illustration purposes, I will focus on the most mature and widely used one: classification.

    Nevertheless, the issues I will discuss apply across all of machine learning. A classifier is a system that inputs (typically) a vector of discrete and/or continuous feature values and outputs a single discrete value, the class.
For example, a spam filter classifies email messages into “spam” or “not spam,” and its input may be a Boolean vector x = (x1, . . . , xj, . . . , xd), where xj = 1 if the jth word in the dictionary appears in the email and xj = 0 otherwise. A learner inputs a training set of examples (xi, yi), where xi = (xi,1, . . . , xi,d) is an observed input and yi is the corresponding output, and it outputs a classifier.

The test of the learner is whether this classifier produces the correct output yt for future examples xt (e.g., whether the spam filter correctly classifies previously unseen emails as spam or not spam).
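
As a rough illustration of this setup, the sketch below builds the Boolean word vectors described above and trains one possible learner (Bernoulli naive Bayes from scikit-learn) on a handful of invented emails; the dictionary and messages are made up for illustration:

```python
# Requires scikit-learn. The dictionary and emails are invented for illustration;
# Bernoulli naive Bayes is just one possible learner for Boolean feature vectors.
from sklearn.naive_bayes import BernoulliNB

dictionary = ["free", "money", "meeting", "project"]

def to_vector(email):
    # xj = 1 if the jth dictionary word appears in the email, 0 otherwise.
    words = email.lower().split()
    return [1 if w in words else 0 for w in dictionary]

train_emails = ["free money now", "win free money",
                "project meeting today", "meeting about the project"]
train_labels = ["spam", "spam", "not spam", "not spam"]  # yi for each xi

learner = BernoulliNB().fit([to_vector(e) for e in train_emails], train_labels)

# The test: does the classifier give the correct output for a previously unseen email?
print(learner.predict([to_vector("free money for your project")]))  # e.g. ['spam']
```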


    LEARNING

    Suppose you have an application that you think machine learning might be good for. The first problem facing you is the bewildering variety of learning algorithms available.

    Which one to use? There are literally thousands available, and hundreds more are published each year. The key to not getting lost in this huge space is to realize that it consists of combinations of just three components. The components are:

    Representation

    A classifier must be represented in some formal language that the computer can handle. Conversely, choosing a representation for a learner is tantamount to choosing the set of classifiers that it can possibly learn. This set is called the hypothesis space of the learner. If a classifier is not in the hypothesis space, it cannot be learned. A related question, which we will address in a later section, is how to represent the input, i.e., what features to use.

    Evaluation

    An evaluation function (also called objective function or scoring function) is needed to distinguish good classifiers from bad ones. The evaluation function used internally by the algorithm may differ from the external one that we want the classifier to optimize, for ease of optimization (see below) and due to the issues discussed in the next section.

    Optimization

    Finally, we need a method to search among the classifiers in the language for the highest-scoring one. The choice of optimization technique is key to the efficiency of the learner, and also helps determine the classifier produced if the evaluation function has more than one optimum. It is common for new learners to start out using off-the-shelf optimizers, which are later replaced by custom-designed ones.
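
To see the three components together, here is a minimal from-scratch sketch (all data invented): the representation is a linear score passed through a sigmoid, the evaluation function is the logistic loss, and the optimization is plain gradient descent.

```python
# A minimal from-scratch sketch of the three components on invented 1-D data.
# Representation: a linear score w*x + b passed through a sigmoid (the hypothesis space).
# Evaluation: the logistic loss distinguishes good parameter settings from bad ones.
# Optimization: plain gradient descent searches that space for a high-scoring classifier.
import math

X = [0.5, 1.0, 1.5, 3.0, 3.5, 4.0]  # toy feature values
y = [0, 0, 0, 1, 1, 1]              # toy class labels

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):               # optimization loop
    dw = db = 0.0
    for xi, yi in zip(X, y):
        p = sigmoid(w * xi + b)     # representation: predicted probability of class 1
        dw += (p - yi) * xi         # gradient of the logistic loss
        db += (p - yi)
    w -= lr * dw / len(X)
    b -= lr * db / len(X)

# Evaluation: average logistic loss of the learned classifier on the training data.
loss = -sum(yi * math.log(sigmoid(w * xi + b)) + (1 - yi) * math.log(1 - sigmoid(w * xi + b))
            for xi, yi in zip(X, y)) / len(X)
print(f"w={w:.2f}, b={b:.2f}, training loss={loss:.3f}")
```

Swapping any one component changes the learner: a tree-structured representation, an accuracy-based evaluation, or a greedy search would each give a different algorithm built from the same three parts.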



Thanks for reading our post “Things to Know about Machine Learning”; please connect with us for any further inquiries. We are Next Big Technology, a leading web & mobile application development company. We build high-quality applications to fulfill all your business needs.

    The Author
    Amit Shukla
    Director of NBT
    Amit Shukla is the Director of Next Big Technology, a leading IT consulting company. With a profound passion for staying updated on the latest trends and technologies across various domains, Amit is a dedicated entrepreneur in the IT sector. He takes it upon himself to enlighten his audience with the most current market trends and innovations. His commitment to keeping the industry informed is a testament to his role as a visionary leader in the world of technology.