Available Master's thesis topics in machine learning


Here we list topics that are available. You may also be interested in our list of completed Master's theses.

Learning and inference with large Bayesian networks

Most learning and inference tasks with Bayesian networks are NP-hard. Therefore, one often resorts to heuristics that provide no quality guarantees.

Task: Evaluate the quality of large-scale learning or inference algorithms empirically.
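As a toy illustration of why exact inference does not scale, the sketch below computes a conditional probability in a three-variable network by brute-force enumeration. The probability tables are invented for illustration; the cost of this approach grows exponentially with the number of variables, which is why one resorts to heuristics for large networks.

```python
# Toy Bayesian network: Rain -> Sprinkler -> GrassWet (illustrative numbers).
P_R = {True: 0.2, False: 0.8}                        # P(R)
P_S = {True: {True: 0.01, False: 0.99},              # P(S | R): outer key R
       False: {True: 0.4, False: 0.6}}
P_W = {(True, True): 0.99, (True, False): 0.9,       # P(W=True | S, R)
       (False, True): 0.9, (False, False): 0.0}

def joint(r, s, w):
    """Joint probability P(R=r, S=s, W=w) via the chain rule."""
    pw = P_W[(s, r)]
    return P_R[r] * P_S[r][s] * (pw if w else 1 - pw)

def prob_rain_given_wet():
    """P(R=True | W=True) by enumerating every assignment of the
    remaining variables -- exponential in the number of variables."""
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
    return num / den
```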

Advisor: Pekka Parviainen

Sum-product networks

Traditionally, probabilistic graphical models use a graph structure to represent dependencies and independencies between random variables. Sum-product networks are a relatively new type of graphical model in which the graph structure models computations rather than the relationships between variables. The benefit of this representation is that inference (computing conditional probabilities) can be done in linear time with respect to the size of the network.

Potential thesis topics in this area: a) Compare inference speed of sum-product networks and Bayesian networks, and characterize situations where one model is preferable to the other. b) Sum-product networks are learned using heuristic algorithms. What is the effect of this approximation in practice?
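As a minimal sketch of the linear-time inference claim, the toy sum-product network below (with invented weights) computes a joint and a marginal probability in a single bottom-up pass, visiting each node exactly once:

```python
import math

def evaluate(node, assignment):
    """Evaluate an SPN bottom-up; each node is visited once,
    so inference is linear in the size of the network."""
    kind = node[0]
    if kind == 'leaf':
        _, var, val = node
        if assignment.get(var) is None:          # marginalized-out variable
            return 1.0
        return 1.0 if assignment[var] == val else 0.0
    if kind == 'sum':
        return sum(w * evaluate(child, assignment) for w, child in node[1])
    if kind == 'prod':
        return math.prod(evaluate(child, assignment) for child in node[1])
    raise ValueError(kind)

# A tiny SPN over two binary variables X and Y (weights are illustrative):
spn = ('prod', [
    ('sum', [(0.7, ('leaf', 'X', 1)), (0.3, ('leaf', 'X', 0))]),
    ('sum', [(0.3, ('leaf', 'Y', 1)), (0.7, ('leaf', 'Y', 0))]),
])

p_joint = evaluate(spn, {'X': 1, 'Y': 1})        # P(X=1, Y=1)
p_marg  = evaluate(spn, {'X': 1, 'Y': None})     # P(X=1), Y marginalized out
```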

Bayesian Bayesian networks

The naming of Bayesian networks is somewhat misleading, because there is nothing Bayesian in them per se; a Bayesian network is just a representation of a joint probability distribution. One can, of course, use a Bayesian network while doing Bayesian inference. One can also learn Bayesian networks in a Bayesian way. That is, instead of finding a single optimal network, one computes the posterior distribution over networks.

Task: Develop algorithms for Bayesian learning of Bayesian networks (e.g., MCMC, variational inference, EM)

Large-scale (probabilistic) matrix factorization

The idea behind matrix factorization is to represent a large data matrix as a product of two or more smaller matrices. Such factorizations are often used in, for example, dimensionality reduction and recommendation systems. Probabilistic matrix factorization methods can be used to quantify uncertainty in recommendations. However, large-scale (probabilistic) matrix factorization is computationally challenging.

Potential thesis topics in this area: a) Develop scalable methods for large-scale matrix factorization (non-probabilistic or probabilistic), b) Develop probabilistic methods for implicit feedback (e.g., a recommendation engine where there are no explicit ratings, only knowledge of whether a customer has bought an item)
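For concreteness, a minimal (non-probabilistic) sketch of matrix factorization via alternating least squares on a toy ratings matrix with missing entries might look like this; the data, latent dimension, and regularization strength are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ratings matrix; np.nan marks unobserved entries.
R = np.array([[5, 3, np.nan, 1],
              [4, np.nan, np.nan, 1],
              [1, 1, np.nan, 5],
              [np.nan, 1, 5, 4.]])
mask = ~np.isnan(R)

k, lam = 2, 0.1                          # latent dimension, L2 regularization
U = rng.normal(scale=0.1, size=(R.shape[0], k))   # user factors
V = rng.normal(scale=0.1, size=(R.shape[1], k))   # item factors

for _ in range(50):                      # alternating least squares
    for i in range(R.shape[0]):          # update each user factor in turn
        cols = mask[i]
        A = V[cols].T @ V[cols] + lam * np.eye(k)
        U[i] = np.linalg.solve(A, V[cols].T @ R[i, cols])
    for j in range(R.shape[1]):          # then each item factor
        rows = mask[:, j]
        A = U[rows].T @ U[rows] + lam * np.eye(k)
        V[j] = np.linalg.solve(A, U[rows].T @ R[rows, j])

pred = U @ V.T                           # entries at np.nan positions are predictions
rmse = np.sqrt(np.mean((pred[mask] - R[mask]) ** 2))
```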

Bayesian deep learning

Standard deep neural networks do not quantify uncertainty in predictions. On the other hand, Bayesian methods provide a principled way to handle uncertainty. Combining these approaches leads to Bayesian neural networks. The challenge is that Bayesian neural networks can be cumbersome to use and difficult to learn.

The task is to analyze Bayesian neural networks and different inference algorithms in some simple setting.
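A minimal sketch of one popular approximate approach, Monte Carlo dropout, is shown below: dropout is kept active at prediction time, and repeated stochastic forward passes give a distribution over predictions. The tiny network and its weights are random stand-ins, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny fixed two-layer network; weights are stand-ins for a trained model.
W1 = rng.normal(size=(1, 32))
W2 = rng.normal(size=(32, 1))

def predict(x, drop=0.5):
    """One stochastic forward pass with dropout kept on at prediction time."""
    h = np.maximum(0.0, x @ W1)              # ReLU hidden layer
    keep = rng.random(h.shape) > drop        # random dropout mask
    h = h * keep / (1.0 - drop)              # inverted-dropout scaling
    return (h @ W2).item()

x = np.array([[0.3]])
samples = np.array([predict(x) for _ in range(200)])
mean, std = samples.mean(), samples.std()    # predictive mean and spread
```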

Deep learning for combinatorial problems

Deep learning is usually applied in regression or classification problems. However, there has been some recent work on using deep learning to develop heuristics for combinatorial optimization problems; see, e.g., [1] and [2].

Task: Choose a combinatorial problem (or several related problems) and develop deep learning methods to solve them.

References: [1] Vinyals, Fortunato and Jaitly: Pointer networks. NIPS 2015. [2] Dai, Khalil, Zhang, Dilkina and Song: Learning Combinatorial Optimization Algorithms over Graphs. NIPS 2017.

Advisors: Pekka Parviainen, Ahmad Hemmati

Estimating the number of modes of an unknown function

Mode seeking considers estimating the number of local maxima of a function f. Sometimes one can find modes by, e.g., looking for points where the derivative of the function is zero. However, often the function is unknown and we only have access to some (possibly noisy) values of it.

In topological data analysis, we can analyze topological structures using persistent homology. For 1-dimensional signals, this translates into looking at the birth/death persistence diagram, i.e., the birth and death of connected topological components as we expand the space around each point where we have observed our function. These observations turn out to be closely related to the modes (local maxima) of the function. A recent paper [1] proposed an efficient method for mode seeking.

In this project, the task is to extend the ideas from [1] to get a probabilistic estimate on the number of modes. To this end, one has to use probabilistic methods such as Gaussian processes.
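As a non-probabilistic starting point, the following sketch counts modes of a sampled 1-D signal using 0-dimensional persistence: sweep from the highest observed value down, track connected components of the superlevel set with union-find, and keep modes whose persistence (birth minus death) exceeds a threshold. This is a simplified version of the idea formalized in [1]:

```python
def count_modes(values, min_persistence=0.0):
    """Count local maxima of a 1-D sampled signal via 0-dimensional
    persistence of superlevel sets."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i], reverse=True)
    parent = {}          # union-find parent for each processed index
    birth = {}           # component root -> function value at its birth
    finite_persistences = []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]     # path halving
            i = parent[i]
        return i

    for i in order:
        neighbors = [j for j in (i - 1, i + 1) if j in parent]
        parent[i] = i
        if not neighbors:                     # a new mode is born here
            birth[i] = values[i]
            continue
        roots = {find(j) for j in neighbors}
        best = max(roots, key=lambda r: birth[r])   # oldest component survives
        for r in roots - {best}:
            finite_persistences.append(birth[r] - values[i])  # younger one dies
            parent[r] = best
        parent[i] = best

    survivors = len({find(i) for i in range(n)})    # global maxima never die
    return survivors + sum(p > min_persistence for p in finite_persistences)
```

For example, the signal `[0, 2, 1, 3, 0.5, 1.2, 0]` has three local maxima; raising `min_persistence` filters out the shallowest one.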

[1] U. Bauer, A. Munk, H. Sieling, and M. Wardetzky. Persistence barcodes versus Kolmogorov signatures: Detecting modes of one-dimensional signals. Foundations of Computational Mathematics, 17:1-33, 2017.

Advisors: Pekka Parviainen, Nello Blaser

Causal Abstraction Learning

We naturally make sense of the world around us by working out causal relationships between objects and by representing in our minds these objects with different degrees of approximation and detail. Both processes are essential to our understanding of reality, and likely to be fundamental for developing artificial intelligence. The first process may be expressed using the formalism of structural causal models, while the second can be grounded in the theory of causal abstraction.        

This project will consider the problem of learning an abstraction between two given structural causal models. The primary goal will be the development of efficient algorithms able to learn a meaningful abstraction between the given causal models.

Advisor: Fabio Massimo Zennaro

Causal Bandits

"Multi-armed bandit" is an informal name for slot machines, and the formal name of a large class of problems where an agent has to choose an action among a range of possibilities without knowing the ensuing rewards. Multi-armed bandit problems are one of the most essential reinforcement learning problems where an agent is directly faced with an exploitation-exploration trade-off.

This project will consider a class of multi-armed bandits where an agent, upon taking an action, interacts with a causal system. The primary goal will be the development of learning strategies that take advantage of the underlying causal system in order to learn optimal policies in the shortest amount of time.
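As a baseline without any causal structure, a classical epsilon-greedy strategy on a toy Bernoulli bandit might look as follows (the arm probabilities are invented); a causal-bandit strategy would aim to beat this by exploiting knowledge of the underlying causal system:

```python
import random

random.seed(0)

# Three arms with unknown Bernoulli reward probabilities (illustrative values).
true_probs = [0.2, 0.5, 0.8]
counts = [0, 0, 0]
values = [0.0, 0.0, 0.0]          # running mean reward per arm

def pull(arm):
    return 1.0 if random.random() < true_probs[arm] else 0.0

epsilon = 0.1
for _ in range(5000):
    if random.random() < epsilon:                       # explore
        arm = random.randrange(3)
    else:                                               # exploit current best
        arm = max(range(3), key=lambda a: values[a])
    r = pull(arm)
    counts[arm] += 1
    values[arm] += (r - values[arm]) / counts[arm]      # incremental mean

best_arm = max(range(3), key=lambda a: values[a])
```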

Causal Modelling for Battery Manufacturing

Lithium-ion batteries are poised to be one of the most important sources of energy in the near future. Yet, the process of manufacturing these batteries is very hard to model and control. Optimizing the different phases of production to maximize the lifetime of the batteries is a non-trivial challenge since physical models are limited in scope and collecting experimental data is extremely expensive and time-consuming.        

This project will consider the problem of aggregating and analyzing data regarding a few stages in the process of battery manufacturing. The primary goal will be the development of algorithms for transporting and integrating data collected in different contexts, as well as the use of explainable algorithms to interpret them.

Reinforcement Learning for Computer Security

The field of computer security presents a wide variety of challenging problems for artificial intelligence and autonomous agents. Guaranteeing the security of a system against attacks and penetrations by malicious hackers has always been a central concern of this field, and machine learning could now offer a substantial contribution. Security capture-the-flag simulations are particularly well-suited as a testbed for the application and development of reinforcement learning algorithms.

This project will consider the use of reinforcement learning for the preventive purpose of testing systems and discovering vulnerabilities before they can be exploited. The primary goal will be the modelling of capture-the-flag challenges of interest and the development of reinforcement learning algorithms that can solve them.

Approaches to AI Safety

The world and the Internet are more and more populated by artificial autonomous agents carrying out tasks on our behalf. Many of these agents are provided with an objective and learn their behaviour by trying to achieve that objective as best they can. However, this approach cannot guarantee that an agent, while learning its behaviour, will not undertake actions that have unforeseen and undesirable effects. Research in AI safety tries to design autonomous agents that will behave in a predictable and safe way.

This project will consider specific problems and novel solutions in the domain of AI safety and reinforcement learning. The primary goal will be the development of innovative algorithms and their implementation within established frameworks.

The Topology of Flight Paths

Air traffic data tells us the position, direction, and speed of an aircraft at a given time. In other words, if we restrict our focus to a single aircraft, we are looking at a multivariate time series. Geometrically, we can visualize the flight path as a curve above the earth's surface. Topological data analysis (TDA) provides different methods for analysing the shape of data, and may therefore help us extract meaningful features from air traffic data. Although typical flight path shapes may not be particularly interesting, we can attempt to identify unusual patterns or “abnormal” manoeuvres, such as aborted landings, go-arounds, or diverts.

Advisors: Odin Hoff Gardå, Nello Blaser

Automatic hyperparameter selection for isomap

Isomap is a non-linear dimensionality reduction method with two free hyperparameters (number of nearest neighbors and neighborhood radius). Different hyperparameters result in dramatically different embeddings. Previous methods for selecting hyperparameters focused on choosing a single optimal hyperparameter. In this project, you will explore the use of persistent homology to find parameter ranges that result in stable embeddings. The project has theoretical and computational aspects.
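One prerequisite for a stable Isomap embedding is that the k-nearest-neighbor graph is connected, since geodesic distances are computed on that graph. The sketch below (pure NumPy, toy data) shows how the number of connected components of this graph changes with the number of neighbors k:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated toy clusters; the k-NN graph on such data is
# disconnected for small k and becomes connected as k grows.
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])

def n_components(X, k):
    """Number of connected components of the symmetrized k-NN graph."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    nn = np.argsort(d, axis=1)[:, 1:k + 1]       # k nearest neighbors (skip self)
    adj = [set() for _ in range(len(X))]
    for i, row in enumerate(nn):
        for j in row:
            adj[i].add(int(j))
            adj[int(j)].add(i)                   # symmetrize the graph
    seen, comps = set(), 0
    for s in range(len(X)):                      # depth-first search per component
        if s in seen:
            continue
        comps += 1
        stack = [s]
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            stack.extend(adj[u] - seen)
    return comps

components_by_k = {k: n_components(X, k) for k in (2, 5, 10, 25)}
```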

Advisor: Nello Blaser

Validate persistent homology

Persistent homology is a generalization of hierarchical clustering that finds more structure than just the clusters. Traditionally, hierarchical clustering has been evaluated using resampling methods and by assessing stability properties. In this project, you will generalize these resampling methods to develop novel stability properties that can be used to assess persistent homology. This project has theoretical and computational aspects.

Topological Anscombe's quartet

This topic is based on the classical Anscombe's quartet and families of point sets with identical 1D persistence ( https://arxiv.org/abs/2202.00577 ). The goal is to generate more interesting datasets using the simulated annealing methods presented in ( http://library.usc.edu.ph/ACM/CHI%202017/1proc/p1290.pdf ). This project is mostly computational.

Persistent homology vectorization with cycle location

There are many methods of vectorizing persistence diagrams, such as persistence landscapes, persistence images, PersLay and statistical summaries. Recently, we have designed algorithms that in some cases efficiently detect the location of persistence cycles. In this project, you will vectorize not just the persistence diagram, but also additional information such as the location of these cycles. This project is mostly computational with some theoretical aspects.

Divisive covers

Divisive covers are a divisive technique for generating filtered simplicial complexes. They originally used a naive way of dividing data into a cover. In this project, you will explore different methods of dividing space, based on principal component analysis, support vector machines and k-means clustering. In addition, you will explore methods of using divisive covers for classification. This project will be mostly computational.

Learning Acquisition Functions for Cost-aware Bayesian Optimization

This is a follow-up project to an earlier Master's thesis that developed a novel method for learning acquisition functions in Bayesian optimization through the use of reinforcement learning. The goal of this project is to further generalize this method (more general input, learned cost functions) and apply it to hyperparameter optimization for neural networks.

Advisors: Nello Blaser, Audun Ljone Henriksen

Stable updates

This is a follow-up project to an earlier Master's thesis that introduced and studied empirical stability in the context of tree-based models. The goal of this project is to develop stable update methods for deep learning models. You will design several stable methods and empirically compare them (in terms of loss and stability) with a baseline and with one another.

Advisors: Morten Blørstad, Nello Blaser

Multimodality in Bayesian neural network ensembles

One method to assess uncertainty in neural network predictions is to use dropout or noise generators at prediction time and run every prediction many times. This leads to a distribution of predictions. Informatively summarizing such probability distributions is a non-trivial task and the commonly used means and standard deviations result in the loss of crucial information, especially in the case of multimodal distributions with distinct likely outcomes. In this project, you will analyze such multimodal distributions with mixture models and develop ways to exploit such multimodality to improve training. This project can have theoretical, computational and applied aspects.

Learning a hierarchical metric

Often, labels have defined relationships to each other, for instance in a hierarchical taxonomy. For example, ImageNet labels are derived from the WordNet graph, and biological species are taxonomically related and can have similarities depending on life stage, sex, or other properties.

ArcFace is an alternative loss function that aims for an embedding that is more generally useful than one trained with a softmax classifier. It is commonly used in metric learning and few-shot learning.

Here, we will develop a metric learning method that learns from data with hierarchical labels. Using multiple ArcFace heads, we will simultaneously learn to place representations to optimize the leaf label as well as intermediate labels on the path from leaf to root of the label tree. Using taxonomically classified plankton image data, we will measure performance as a function of ArcFace parameters (sharpness/temperature and margins -- class-wise or level-wise), and compare the results to existing methods.
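A minimal NumPy sketch of the ArcFace logit computation is given below; the margin and scale values follow common defaults but are tunable hyperparameters. In the hierarchical setting described above, one such head could be attached per taxonomy level (leaf, intermediate, root), each with its own margin:

```python
import numpy as np

def arcface_logits(embeddings, class_centers, labels=None, margin=0.5, scale=64.0):
    """ArcFace-style logits: cosine similarity between L2-normalized
    embeddings and class centers, with an additive angular margin
    applied to each sample's true class when labels are given."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    c = class_centers / np.linalg.norm(class_centers, axis=1, keepdims=True)
    cos = e @ c.T                                    # cosine similarities
    if labels is not None:
        rows = np.arange(len(e))
        theta = np.arccos(np.clip(cos[rows, labels], -1.0, 1.0))
        cos[rows, labels] = np.cos(theta + margin)   # additive angular margin
    return scale * cos

# Toy usage: one embedding, two class centers, true class 0.
demo = arcface_logits(np.array([[0.6, 0.8]]),
                      np.array([[1.0, 0.0], [0.0, 1.0]]),
                      labels=np.array([0]))
```

The margin penalizes the true-class logit during training, forcing embeddings of the same class to cluster more tightly in angular space.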

Advisor: Ketil Malde ( [email protected] )

Self-supervised object detection in video

One challenge with learning object detection is that in many scenes that stretch off into the distance, annotating small, far-off, or blurred objects is difficult. It is therefore desirable to learn from incompletely annotated scenes, and one-shot object detectors may suffer from incompletely annotated training data.

To address this, we will use a region-proposal algorithm (e.g. SelectiveSearch) to extract potential crops from each frame. Classification will be based on two approaches: a) training based on annotated fish vs. random similarly-sized crops without annotations, and b) using a self-supervised method to build a representation for crops, and building a classifier for the extracted regions. The method will be evaluated against one-shot detectors and other training regimes.

If successful, the method will be applied to fish detection and tracking in videos from baited and unbaited underwater traps, and used to estimate abundance of various fish species.

See also: Benettino (2016): https://link.springer.com/chapter/10.1007/978-3-319-48881-3_56

Representation learning for object detection

While traditional classifiers work well with data that is labeled with disjoint classes and reasonably balanced class abundances, reality is often less clean. An alternative is to learn a vector space embedding that reflects semantic relationships between objects, and to derive classes from this representation. This is especially useful for few-shot classification (i.e., very few examples in the training data).

The task here is to extend a modern object detector (e.g. YOLOv8) to output an embedding of the identified object. Instead of a softmax classifier, we can learn the embedding in a supervised manner (using annotations on frames) by attaching an ArcFace or other supervised metric learning head. Alternatively, the representation can be learned from tracked detections over time using, e.g., a contrastive loss function to keep the representation for an object (approximately) constant over time. The performance of the resulting object detector will be measured on underwater videos, targeting species detection and/or individual recognition (re-ID).

Time-domain object detection

Object detectors for video are normally trained on still frames, but it is evident (from human experience) that using time domain information is more effective. I.e., it can be hard to identify far-off or occluded objects in still images, but movement in time often reveals them.

Here we will extend a state-of-the-art object detector (e.g. YOLOv8) with time-domain data. Instead of using a single frame as input, the model will be modified to take a set of frames surrounding the annotated frame as input. Performance will be compared to single-frame detection.

Large-scale visualization of acoustic data

The Institute of Marine Research has decades of acoustic data collected in various surveys. These data are in the process of being converted to data formats that can be processed and analyzed more easily using packages like Xarray and Dask.

The objective is to make these data more accessible to regular users by providing a visual front end. The user should be able to quickly zoom in and out, perform selection, export subsets, apply various filters and classifiers, and overlay annotations and other relevant auxiliary data.

Learning acoustic target classification from simulation

Broadband echosounders emit a complex signal that spans a large frequency band. Different targets will reflect, absorb, and generate resonance at different amplitudes and frequencies, and it is therefore possible to classify targets at much higher resolution and accuracy than before. Due to the complexity of the received signals, deriving effective profiles that can be used to identify targets is difficult.

Here we will use simulated frequency spectra from geometric objects with various shapes, orientations, and other properties. We will train ML models to estimate (recover) the geometric and material properties of objects based on these spectra. The resulting model will be applied to real broadband data and compared to traditional classification methods.

Online learning in real-time systems

Build a model of the drilling process by using the virtual simulator OpenLab ( https://openlab.app/ ) for real-time data generation and online learning techniques. The student will also do a short survey of existing online learning techniques and learn how to cope with errors and delays in the data.

Advisor: Rodica Mihai

Building a finite state automaton for the drilling process by using queries and counterexamples

Datasets will be generated by using the virtual simulator OpenLab ( https://openlab.app/ ). The student will study the datasets and decide upon a good setting for extracting a finite state automaton for the drilling process. The student will also do a short survey of existing techniques for extracting finite state automata from process data. One relevant approach uses exact learning and abstraction to extract a deterministic finite automaton describing the state dynamics of a given trained RNN, using Angluin's L* algorithm as a learner and the trained RNN as an oracle; this technique efficiently extracts accurate automata even when the state vectors are large and require fine differentiation.

Scaling Laws for Language Models in Generative AI

Large Language Models (LLMs) power today's most prominent language technologies in Generative AI, like ChatGPT, which, in turn, are changing the way people access information and solve tasks of many kinds.

Recent interest in scaling laws for LLMs has focused on understanding how their performance depends on factors such as the amount of training data, the size of the model, and the computational budget allocated. (See, for example, Kaplan et al., "Scaling Laws for Neural Language Models", 2020.)
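Since a scaling law is typically a power law, it is linear in log-log space and can be fitted by linear regression. The sketch below does this on synthetic loss-versus-model-size data; the coefficient and exponent are invented for illustration, not taken from any published study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic (model size, loss) pairs following an assumed power law
# L(N) = a * N**(-b), plus multiplicative noise; estimating the exponent b
# from real training runs is the core of a scaling-law study.
N = np.array([1e6, 3e6, 1e7, 3e7, 1e8, 3e8])
loss = 5.0 * N ** -0.076 * np.exp(rng.normal(0, 0.01, N.size))

# A power law is linear in log-log space: log L = log a - b * log N.
slope, intercept = np.polyfit(np.log(N), np.log(loss), 1)
a_hat, b_hat = np.exp(intercept), -slope
```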

In this project, the task is to study scaling laws for different language models with respect to one or more modeling factors.

Advisor: Dario Garigliotti

Applications of causal inference methods to omics data

Many hard problems in machine learning are directly linked to causality [1]. The graphical causal inference framework developed by Judea Pearl can be traced back to pioneering work by Sewall Wright on path analysis in genetics and has inspired research in artificial intelligence (AI) [1].

The Michoel group has developed the open-source tool Findr [2], which provides efficient implementations of mediation and instrumental variable methods for applications to large sets of omics data (genomics, transcriptomics, etc.). Findr works well on a recent dataset for yeast [3].

We encourage students to explore promising connections between the fields of causal inference and machine learning. Feel free to contact us to discuss projects related to causal inference. Possible topics include: a) improving methods based on structural causal models, b) evaluating causal inference methods on data for model organisms, c) comparing methods based on causal models and neural network approaches.

References:

1. Schölkopf B, Causality for Machine Learning, arXiv (2019):  https://arxiv.org/abs/1911.10500

2. Wang L and Michoel T. Efficient and accurate causal inference with hidden confounders from genome-transcriptome variation data. PLoS Computational Biology 13:e1005703 (2017).  https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005703

3. Ludl A and Michoel T. Comparison between instrumental variable and mediation-based methods for reconstructing causal gene networks in yeast. arXiv:2010.07417  https://arxiv.org/abs/2010.07417

Advisors: Adriaan Ludl, Tom Michoel

Space-Time Linkage of Fish Distribution to Environmental Conditions

Conditions in the marine environment, such as temperature and currents, influence the spatial distribution and migration patterns of marine species. Hence, understanding the link between environmental factors and fish behavior is crucial in predicting, e.g., how fish populations may respond to climate change. Deriving this link is challenging because it requires analysis of two types of datasets: (i) large environmental datasets (currents, temperature) that vary in space and time, and (ii) sparse and sporadic spatial observations of fish populations.

Project goal   

The primary goal of the project is to develop a methodology that helps predict how the spatial distribution of two fish stocks (capelin and mackerel) changes in response to variability in the physical marine environment (ocean currents and temperature). The information can also be used to optimize data collection by minimizing time spent in spatial sampling of the populations.

The project will focus on the use of machine learning and/or causal inference algorithms. As a first step, we use synthetic (fish and environmental) data from analytic models that couple the two data sources. Because the ‘truth’ is known, we can judge the efficiency and error margins of the methodologies. We then apply the methodologies to real-world (empirical) observations.

Advisors: Tom Michoel, Sam Subbey.

Towards precision medicine for cancer patient stratification

On average, a drug or a treatment is effective in only about half of the patients who take it. This means patients need to try several until they find one that is effective, at the cost of the side effects associated with every treatment. The ultimate goal of precision medicine is to provide a treatment best suited for every individual. Sequencing technologies have now made genomics data available in abundance to be used towards this goal.

In this project we will specifically focus on cancer. Most cancer patients get a particular treatment based on the cancer type and stage, though different individuals will react differently to a treatment. It is now well established that genetic mutations cause cancer growth and spreading and, importantly, that these mutations differ between individual patients. The aim of this project is to use genomic data to enable better stratification of cancer patients, in order to predict the treatment most likely to work. Specifically, the project will use a machine learning approach to integrate genomic data and build a classifier for stratification of cancer patients.

Advisor: Anagha Joshi

Unraveling gene regulation from single cell data

Multi-cellularity is achieved by precise control of gene expression during development and differentiation, and aberrations of this process lead to disease. A key regulatory process in gene regulation is at the transcriptional level, where epigenetic and transcriptional regulators control the spatial and temporal expression of the target genes in response to environmental, developmental, and physiological cues obtained from a signalling cascade. The rapid advances in sequencing technology have now made it feasible to study this process by understanding the genome-wide patterns of diverse epigenetic and transcription factors, including at the single-cell level.

Single-cell RNA sequencing is highly important, particularly in cancer, as it allows exploration of heterogeneous tumor samples; this heterogeneity obstructs therapeutic targeting and leads to poor survival. Despite its huge clinical relevance and potential, analysis of single-cell RNA-seq data is challenging. In this project, we will develop strategies to infer gene regulatory networks using network inference approaches (both supervised and unsupervised). These will be tested primarily on single-cell datasets in the context of cancer.

Developing a Stress Granule Classifier

To carry out the multitude of functions 'expected' from a human cell, the cell employs a strategy of division of labour, whereby sub-cellular organelles carry out distinct functions. Thus we traditionally understand organelles as distinct units, defined both functionally and physically, with a distinct shape and size range. More recently, a new class of organelles has been discovered that are assembled and dissolved on demand and are composed of liquid droplets or 'granules'. Granules show many properties characteristic of liquids, such as flow and wetting, but they can also assume many shapes and indeed fluctuate in shape. One such liquid organelle is a stress granule (SG).

Stress granules are pro-survival organelles that assemble in response to cellular stress and are important in cancer and neurodegenerative diseases like Alzheimer's. They are liquid or gel-like and can assume varying sizes and shapes depending on their cellular composition.

In a given experiment we are able to image the entire cell over a time series of 1000 frames; from which we extract a rough estimation of the size and shape of each granule. Our current method is susceptible to noise and a granule may be falsely rejected if the boundary is drawn poorly in a small majority of frames. Ideally, we would also like to identify potentially interesting features, such as voids, in the accepted granules.

We are interested in applying a machine learning approach to develop a descriptor for a 'classic' granule and furthermore classify them into different functional groups based on disease status of the cell. This method would be applied across thousands of granules imaged from control and disease cells. We are a multi-disciplinary group consisting of biologists, computational scientists and physicists. 

Advisors: Sushma Grellscheid, Carl Jones

Machine Learning based Hyperheuristic algorithm

Develop a machine-learning-based hyper-heuristic algorithm to solve a pickup and delivery problem. A hyper-heuristic is a heuristic that chooses heuristics automatically; it seeks to automate the process of selecting, combining, generating or adapting several simpler heuristics to efficiently solve computational search problems [Handbook of Metaheuristics]. There may be multiple heuristics for solving a problem, each with its own strengths and weaknesses. In this project, we want to use machine-learning techniques to learn the strengths and weaknesses of each heuristic while using them in an iterative search for high-quality solutions, and then use them intelligently for the rest of the search. As new information is gathered during the search, the hyper-heuristic algorithm automatically adjusts the heuristics.
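A minimal sketch of the adaptive-selection idea: each heuristic's empirical success rate is tracked during the search, and heuristics are then chosen in proportion to that rate. The "heuristics" below are stand-ins with hidden success probabilities; real ones would modify a candidate pickup-and-delivery solution:

```python
import random

random.seed(0)

# Stand-in heuristics with hidden success probabilities (illustrative);
# a real hyper-heuristic would apply them to a candidate solution.
heuristics = {"swap": 0.6, "reinsert": 0.3, "shuffle": 0.1}
attempts = {h: 1 for h in heuristics}     # optimistic initialization
successes = {h: 1 for h in heuristics}

def score(h):
    return successes[h] / attempts[h]     # empirical success rate

for _ in range(2000):
    # Roulette-wheel selection proportional to current scores.
    total = sum(score(h) for h in heuristics)
    r, acc = random.random() * total, 0.0
    for h in heuristics:
        acc += score(h)
        if r <= acc:
            chosen = h
            break
    improved = random.random() < heuristics[chosen]   # did it improve the solution?
    attempts[chosen] += 1
    successes[chosen] += int(improved)

best_heuristic = max(heuristics, key=score)
```

As the search progresses, the scores converge toward each heuristic's true success rate, so the strongest heuristic is applied most often while the weaker ones still get occasional tries.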

Advisor: Ahmad Hemmati

Machine learning for solving satisfiability problems and applications in cryptanalysis

Advisor: Igor Semaev

Hybrid modeling approaches for well drilling with Sintef

Several topics are available.

"Flow models" are first-principles models simulating the flow, temperature and pressure in a well being drilled. Our project is exploring "hybrid approaches" where these models are combined with machine learning models that either learn from time series data from flow model runs or from real-world measurements during drilling. The goal is to better detect drilling problems such as hole cleaning, make more accurate predictions and correctly learn from and interpret real-word data.

The "surrogate model" refers to  a ML model which learns to mimic the flow model by learning from the model inputs and outputs. Use cases for surrogate models include model predictions where speed is favoured over accuracy and exploration of parameter space.

Surrogate models with active Learning

While it is possible to produce a nearly unlimited amount of training data by running the flow model, the surrogate model may still perform poorly if it lacks training data in the part of the parameter space it operates in or if it "forgets" areas of the parameter space by being fed too much data from a narrow range of parameters.

The goal of this thesis is to build a surrogate model (with any architecture) for some restricted parameter range and implement an active learning approach where the ML requests more model runs from the flow model in the parts of the parameter space where it is needed the most. The end result should be a surrogate model that is quick and performs acceptably well over the whole defined parameter range.
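A minimal sketch of such a loop is given below, with a cheap known function standing in for the flow model and a bootstrap ensemble of polynomial fits as the surrogate; the function, ensemble architecture, and parameter range are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_model(x):
    """Stand-in for an expensive flow-model run (any known 1-D function)."""
    return np.sin(3 * x) + 0.5 * x

X_pool = np.linspace(0.0, 4.0, 200)                  # candidate parameter values
X_train = list(rng.choice(X_pool, 8, replace=False)) # initial labeled runs
y_train = [flow_model(x) for x in X_train]

def fit_ensemble(X, y, n_models=10, degree=3):
    """Bootstrap ensemble of polynomial surrogates; the spread of their
    predictions serves as the uncertainty estimate."""
    X, y = np.asarray(X), np.asarray(y)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), len(X))        # bootstrap resample
        models.append(np.polyfit(X[idx], y[idx], degree))
    return models

for _ in range(20):                                  # active-learning rounds
    ensemble = fit_ensemble(X_train, y_train)
    preds = np.array([np.polyval(m, X_pool) for m in ensemble])
    query = X_pool[preds.std(axis=0).argmax()]       # most uncertain candidate
    X_train.append(query)                            # request a flow-model run there
    y_train.append(flow_model(query))
```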

Surrogate models trained via adversarial learning

How best to train surrogate models from runs of the flow model is an open question. This master thesis would use the adversarial learning approach to build a surrogate model which to its "adversary" becomes indistinguishable from the output of an actual flow model run.

GPU-based Surrogate models for parameter search

While CPU speed largely stalled 20 years ago in terms of working frequency on single cores, multi-core CPUs and especially GPUs took off and delivered increases in computational power by parallelizing computations.

Modern machine learning, such as deep learning, takes advantage of this boom in computing power by running on GPUs.

The SINTEF flow models, in contrast, are software programs that run on a CPU and do not utilize multi-core CPU functionality. The model runs advance time step by time step, and each time step relies on the results from the previous time step. The flow models are therefore fundamentally sequential and not well suited to massive parallelization.

It is, however, of interest to run different model runs in parallel in order to explore parameter spaces. Use cases for this include model calibration, problem detection, and hypothesis generation and testing.

The task of this thesis is to implement an ML-based surrogate model in such a way that many surrogate model outputs can be produced simultaneously on a single GPU. This will likely entail trade-offs in model size and perhaps some coding tricks.
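The key mechanic is that a surrogate's forward pass is batched matrix arithmetic, so thousands of parameter sets can be evaluated in one call. The NumPy sketch below shows the pattern with a tiny fixed-weight MLP (all weights and sizes are illustrative); the same batched code written in, for example, PyTorch or JAX would run the whole batch in parallel on a GPU.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny surrogate: a 2-layer MLP with fixed ("trained") weights.
W1, b1 = rng.normal(size=(2, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 1)), np.zeros(1)

def surrogate_batch(X):
    """X: (batch, 2) parameter sets -> (batch,) predictions, in one pass."""
    h = np.tanh(X @ W1 + b1)
    return (h @ W2 + b2).ravel()

# Explore a parameter space: 10,000 candidate parameter sets in one call.
grid = rng.uniform(-1, 1, size=(10_000, 2))
preds = surrogate_batch(grid)
best = grid[np.argmin(preds)]            # e.g. pick the best candidate
print(f"evaluated {len(grid)} parameter sets; best at {best}")
```

This is where the trade-off with model size appears: the batch dimension and the network weights must fit in GPU memory together.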

Uncertainty estimates of hybrid predictions (Lots of room for creativity, might need to steer it more, needs good background literature)

When using predictions from an ML model trained on time series data, it is useful to know whether a prediction is accurate and can be trusted. The student is challenged to develop hybrid approaches that incorporate estimates of uncertainty. Components could include reporting the variance of ML ensembles trained on diverse time series data, implementing conformal prediction, and analysing training data parameter ranges against the current input. The output should be a "traffic light" signal roughly indicating the accuracy of the predictions.
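Two of the suggested components can be combined in a few lines: split conformal prediction gives a calibrated interval width, and a range check flags inputs outside the training data. The sketch below uses a synthetic stand-in predictor, and the green/yellow thresholds are hypothetical tuning choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def predictor(x):
    """Stand-in for the trained time-series ML model (illustrative)."""
    return 2.0 * x

# Held-out calibration data from the regime the model was trained on.
x_cal = rng.uniform(0, 1, size=200)
y_cal = 2.0 * x_cal + rng.normal(0, 0.2, size=200)
scores = np.abs(y_cal - predictor(x_cal))        # conformal scores

alpha = 0.1                                      # target 90% coverage
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n)

def traffic_light(x_new, green=1.0, yellow=2.0):
    """Map conformal interval width + input-range check to a coarse signal."""
    width = 2 * q                                 # symmetric 90% interval
    if x_new < x_cal.min() or x_new > x_cal.max():
        return "red"                              # outside training range
    if width <= green:
        return "green"
    return "yellow" if width <= yellow else "red"

print(traffic_light(0.5), traffic_light(5.0))
```

A thesis would go further, e.g. making the interval width input-dependent (conformalized quantile regression or ensemble variance) so the signal reacts locally rather than globally.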

Transfer learning approaches

We assume an ML model is to be used for time series prediction.

It is possible to train an ML model on a wide range of scenarios in the flow models, but we expect that, to perform well, the model also needs to see model runs representative of the type of well and drilling operation it will be used on. In this thesis, the student implements a transfer learning approach in which the model is trained on general model runs and then fine-tuned on a more representative data set.

(Bonus 1: implementing one-shot learning. Bonus 2: using real-world data in the fine-tuning stage.)
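The core pretrain-then-fine-tune mechanic can be sketched with a linear model and synthetic data; the coefficients, sample sizes, and learning rates below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gd_fit(X, y, w0, steps, lr):
    """Plain gradient descent on mean squared error, from warm start w0."""
    w = w0.copy()
    for _ in range(steps):
        w -= lr * 2 * X.T @ (X @ w - y) / len(y)
    return w

# "General" regime: y = 1.0*x1 + 0.5*x2, plenty of model runs.
X_gen = rng.normal(size=(1000, 2))
y_gen = X_gen @ np.array([1.0, 0.5]) + 0.1 * rng.normal(size=1000)

# "Representative" regime differs slightly: y = 1.2*x1 + 0.4*x2, few runs.
X_rep = rng.normal(size=(30, 2))
y_rep = X_rep @ np.array([1.2, 0.4]) + 0.1 * rng.normal(size=30)

w_pre = gd_fit(X_gen, y_gen, np.zeros(2), steps=500, lr=0.1)   # pretrain
w_ft = gd_fit(X_rep, y_rep, w_pre, steps=50, lr=0.05)          # fine-tune

print("pretrained:", w_pre.round(2), "fine-tuned:", w_ft.round(2))
```

With a neural network the same recipe applies, warm-starting from the pretrained weights; typical refinements are a lower fine-tuning learning rate or freezing early layers to avoid catastrophic forgetting of the general regime.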

ML capable of reframing situations

When a human oversees an operation like well drilling, she has a mental model of the situation, and new data, such as pressure readings from the well, are interpreted in light of this model. This is referred to as "framing" and is the normal mode of work. However, when a problem occurs, it becomes harder to reconcile the data with the mental model. The human then goes into "reframing", building a new mental model that includes the ongoing problem. This can be seen as a process of hypothesis generation and testing.

A computer model, however, lacks re-framing. A flow model will keep making predictions under the assumption that there are no problems, and a separate alarm system will use the deviation between the model predictions and reality to raise an alarm. This is, in a sense, how all alarm systems work, but it means that the human must discard the computer model as a tool at the same time as she is handling a crisis.

The student is given access to a flow model and a surrogate model that can learn from model runs both with and without hole-cleaning problems, and is challenged to develop a hybrid approach in which the ML+flow model continuously performs hypothesis generation and testing and is able to "switch" into predicting a hole-cleaning problem and different remediations of it.
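One simple formalization of this switching behaviour is sequential model comparison: track the log-odds that incoming readings come from the "problem" model rather than the "normal" model, and re-frame once the evidence is strong enough. The sketch below uses synthetic pressure increments and hand-picked numbers; all models and thresholds are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0
drift = 1.5   # hypothesized pressure rise per step if hole cleaning fails

# Simulated pressure increments: normal for 30 steps, then a problem starts.
incs = np.concatenate([rng.normal(0.0, sigma, 30),
                       rng.normal(drift, sigma, 30)])

log_odds, frame, switch_at = 0.0, "normal", None
for t, d in enumerate(incs):
    # Log-likelihood ratio of N(drift, sigma) vs N(0, sigma) for increment d.
    log_odds += (d * drift - drift ** 2 / 2) / sigma ** 2
    log_odds = max(log_odds, -3.0)   # cap old evidence: stay ready to re-frame
    if frame == "normal" and log_odds > 3.0:   # ~20:1 odds in favour
        frame, switch_at = "problem", t

print(f"re-framed to '{frame}' at step {switch_at}")
```

In the thesis, the two Gaussians would be replaced by the flow model and the problem-trained surrogate, and the single "problem" hypothesis by a set of candidate problems and remediations scored the same way.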

Advisor: Philippe Nivlet at SINTEF, together with an advisor from UiB

Explainable AI at Equinor

In the project Machine Teaching for XAI (see  https://xai.w.uib.no ), a Master's thesis is available in collaboration between UiB and Equinor.

Advisor: One of Pekka Parviainen/Jan Arne Telle/Emmanuel Arrighi + Bjarte Johansen from Equinor.

Explainable AI at Eviny

In the project Machine Teaching for XAI (see  https://xai.w.uib.no ), a Master's thesis is available in collaboration between UiB and Eviny.

Advisor: One of Pekka Parviainen/Jan Arne Telle/Emmanuel Arrighi + Kristian Flikka from Eviny.

If you want to suggest your own topic, please contact Pekka Parviainen ,  Fabio Massimo Zennaro or Nello Blaser .


Analytics Insight

Top 10 Research and Thesis Topics for ML Projects in 2022


This article features the top 10 research and thesis topics for ML projects for students to try in 2022.

Text mining and text classification, image-based applications, machine vision, optimization, voice classification, sentiment analysis, recommendation framework projects, mall customers' project, and object detection with deep learning.


CodeAvail

Exploring 250+ Machine Learning Research Topics


In recent years, machine learning has become super popular and grown very quickly. This happened because technology got better, and there’s a lot more data available. Because of this, we’ve seen lots of new and amazing things happen in different areas. Machine learning research is what makes all these cool things possible. In this blog, we’ll talk about machine learning research topics, why they’re important, how you can pick one, what areas are popular to study, what’s new and exciting, the tough problems, and where you can find help if you want to be a researcher.

Why Does Machine Learning Research Matter?


Machine learning research is at the heart of the AI revolution. It underpins the development of intelligent systems capable of making predictions, automating tasks, and improving decision-making across industries. The importance of this research can be summarized as follows:

Advancements in Technology

The growth of machine learning research has led to the development of powerful algorithms, tools, and frameworks. Numerous industries, including healthcare, banking, autonomous cars, and natural language processing, have found uses for these technologies.

As researchers continue to push the boundaries of what’s possible, we can expect even more transformative technologies to emerge.

Real-world Applications

Machine learning research has brought about tangible changes in our daily lives. Voice assistants like Siri and Alexa, recommendation systems on streaming platforms, and personalized healthcare diagnostics are just a few examples of how this research impacts our world. 

By working on new research topics, scientists can further refine these applications and create new ones.

Economic and Industrial Impacts

The economic implications of machine learning research are substantial. Companies that harness the power of machine learning gain a competitive edge in the market. 

This creates a demand for skilled machine learning researchers, driving job opportunities and contributing to economic growth.

How to Choose the Machine Learning Research Topics?

Selecting the right machine learning research topics is crucial for your success as a machine learning researcher. Here’s a guide to help you make an informed decision:

  • Understanding Your Interests

Start by considering your personal interests. Machine learning is a broad field with applications in virtually every sector. By choosing a topic that aligns with your passions, you’ll stay motivated and engaged throughout your research journey.

  • Reviewing Current Trends

Stay updated on the latest trends in machine learning. Attend conferences, read research papers, and engage with the community to identify emerging research topics. Current trends often lead to exciting breakthroughs.

  • Identifying Gaps in Existing Research

Sometimes, the most promising research topics involve addressing gaps in existing knowledge. These gaps may become evident through your own experiences, discussions with peers, or in the course of your studies.

  • Collaborating with Experts

Collaboration is key in research. Working with experts in the field can help you refine your research topic and gain valuable insights. Seek mentors and collaborators who can guide you.

250+ Machine Learning Research Topics: Category-wise

Supervised Learning

  • Explainable AI for Decision Support
  • Few-shot Learning Methods
  • Time Series Forecasting with Deep Learning
  • Handling Imbalanced Datasets in Classification
  • Regression Techniques for Non-linear Data
  • Transfer Learning in Supervised Settings
  • Multi-label Classification Strategies
  • Semi-Supervised Learning Approaches
  • Novel Feature Selection Methods
  • Anomaly Detection in Supervised Scenarios
  • Federated Learning for Distributed Supervised Models
  • Ensemble Learning for Improved Accuracy
  • Automated Hyperparameter Tuning
  • Ethical Implications in Supervised Models
  • Interpretability of Deep Neural Networks.

Unsupervised Learning

  • Unsupervised Clustering of High-dimensional Data
  • Semi-Supervised Clustering Approaches
  • Density Estimation in Unsupervised Learning
  • Anomaly Detection in Unsupervised Settings
  • Transfer Learning for Unsupervised Tasks
  • Representation Learning in Unsupervised Learning
  • Outlier Detection Techniques
  • Generative Models for Data Synthesis
  • Manifold Learning in High-dimensional Spaces
  • Unsupervised Feature Selection
  • Privacy-Preserving Unsupervised Learning
  • Community Detection in Complex Networks
  • Clustering Interpretability and Visualization
  • Unsupervised Learning for Image Segmentation
  • Autoencoders for Dimensionality Reduction.

Reinforcement Learning

  • Deep Reinforcement Learning in Real-world Applications
  • Safe Reinforcement Learning for Autonomous Systems
  • Transfer Learning in Reinforcement Learning
  • Imitation Learning and Apprenticeship Learning
  • Multi-agent Reinforcement Learning
  • Explainable Reinforcement Learning Policies
  • Hierarchical Reinforcement Learning
  • Model-based Reinforcement Learning
  • Curriculum Learning in Reinforcement Learning
  • Reinforcement Learning in Robotics
  • Exploration vs. Exploitation Strategies
  • Reward Function Design and Ethical Considerations
  • Reinforcement Learning in Healthcare
  • Continuous Action Spaces in RL
  • Reinforcement Learning for Resource Management.

Natural Language Processing (NLP)

  • Multilingual and Cross-lingual NLP
  • Contextualized Word Embeddings
  • Bias Detection and Mitigation in NLP
  • Named Entity Recognition for Low-resource Languages
  • Sentiment Analysis in Social Media Text
  • Dialogue Systems for Improved Customer Service
  • Text Summarization for News Articles
  • Low-resource Machine Translation
  • Explainable NLP Models
  • Coreference Resolution in NLP
  • Question Answering in Specific Domains
  • Detecting Fake News and Misinformation
  • NLP for Healthcare: Clinical Document Understanding
  • Emotion Analysis in Text
  • Text Generation with Controlled Attributes.

Computer Vision

  • Video Action Recognition and Event Detection
  • Object Detection in Challenging Conditions (e.g., low light)
  • Explainable Computer Vision Models
  • Image Captioning for Accessibility
  • Large-scale Image Retrieval
  • Domain Adaptation in Computer Vision
  • Fine-grained Image Classification
  • Facial Expression Recognition
  • Visual Question Answering
  • Self-supervised Learning for Visual Representations
  • Weakly Supervised Object Localization
  • Human Pose Estimation in 3D
  • Scene Understanding in Autonomous Vehicles
  • Image Super-resolution
  • Gaze Estimation for Human-Computer Interaction.

Deep Learning

  • Neural Architecture Search for Efficient Models
  • Self-attention Mechanisms and Transformers
  • Interpretability in Deep Learning Models
  • Robustness of Deep Neural Networks
  • Generative Adversarial Networks (GANs) for Data Augmentation
  • Neural Style Transfer in Art and Design
  • Adversarial Attacks and Defenses
  • Neural Networks for Audio and Speech Processing
  • Explainable AI for Healthcare Diagnosis
  • Automated Machine Learning (AutoML)
  • Reinforcement Learning with Deep Neural Networks
  • Model Compression and Quantization
  • Lifelong Learning with Deep Learning Models
  • Multimodal Learning with Vision and Language
  • Federated Learning for Privacy-preserving Deep Learning.

Explainable AI

  • Visualizing Model Decision Boundaries
  • Saliency Maps and Feature Attribution
  • Rule-based Explanations for Black-box Models
  • Contrastive Explanations for Model Interpretability
  • Counterfactual Explanations and What-if Analysis
  • Human-centered AI for Explainable Healthcare
  • Ethics and Fairness in Explainable AI
  • Explanation Generation for Natural Language Processing
  • Explainable AI in Financial Risk Assessment
  • User-friendly Interfaces for Model Interpretability
  • Scalability and Efficiency in Explainable Models
  • Hybrid Models for Combined Accuracy and Explainability
  • Post-hoc vs. Intrinsic Explanations
  • Evaluation Metrics for Explanation Quality
  • Explainable AI for Autonomous Vehicles.

Transfer Learning

  • Zero-shot Learning and Few-shot Learning
  • Cross-domain Transfer Learning
  • Domain Adaptation for Improved Generalization
  • Multilingual Transfer Learning in NLP
  • Pretraining and Fine-tuning Techniques
  • Lifelong Learning and Continual Learning
  • Domain-specific Transfer Learning Applications
  • Model Distillation for Knowledge Transfer
  • Contrastive Learning for Transfer Learning
  • Self-training and Pseudo-labeling
  • Dynamic Adaptation of Pretrained Models
  • Privacy-Preserving Transfer Learning
  • Unsupervised Domain Adaptation
  • Negative Transfer Avoidance in Transfer Learning.

Federated Learning

  • Secure Aggregation in Federated Learning
  • Communication-efficient Federated Learning
  • Privacy-preserving Techniques in Federated Learning
  • Federated Transfer Learning
  • Heterogeneous Federated Learning
  • Real-world Applications of Federated Learning
  • Federated Learning for Edge Devices
  • Federated Learning for Healthcare Data
  • Differential Privacy in Federated Learning
  • Byzantine-robust Federated Learning
  • Federated Learning with Non-IID Data
  • Model Selection in Federated Learning
  • Scalable Federated Learning for Large Datasets
  • Client Selection and Sampling Strategies
  • Global Model Update Synchronization in Federated Learning.

Quantum Machine Learning

  • Quantum Neural Networks and Quantum Circuit Learning
  • Quantum-enhanced Optimization for Machine Learning
  • Quantum Data Compression and Quantum Principal Component Analysis
  • Quantum Kernels and Quantum Feature Maps
  • Quantum Variational Autoencoders
  • Quantum Transfer Learning
  • Quantum-inspired Classical Algorithms for ML
  • Hybrid Quantum-Classical Models
  • Quantum Machine Learning on Near-term Quantum Devices
  • Quantum-inspired Reinforcement Learning
  • Quantum Computing for Quantum Chemistry and Drug Discovery
  • Quantum Machine Learning for Finance
  • Quantum Data Structures and Quantum Databases
  • Quantum-enhanced Cryptography in Machine Learning
  • Quantum Generative Models and Quantum GANs.

Ethical AI and Bias Mitigation

  • Fairness-aware Machine Learning Algorithms
  • Bias Detection and Mitigation in Real-world Data
  • Explainable AI for Ethical Decision Support
  • Algorithmic Accountability and Transparency
  • Privacy-preserving AI and Data Governance
  • Ethical Considerations in AI for Healthcare
  • Fairness in Recommender Systems
  • Bias and Fairness in NLP Models
  • Auditing AI Systems for Bias
  • Societal Implications of AI in Criminal Justice
  • Ethical AI Education and Training
  • Bias Mitigation in Autonomous Vehicles
  • Fair AI in Financial and Hiring Decisions
  • Case Studies in Ethical AI Failures
  • Legal and Policy Frameworks for Ethical AI.

Meta-Learning and AutoML

  • Neural Architecture Search (NAS) for Efficient Models
  • Transfer Learning in NAS
  • Reinforcement Learning for NAS
  • Multi-objective NAS
  • Automated Data Augmentation
  • Neural Architecture Optimization for Edge Devices
  • Bayesian Optimization for AutoML
  • Model Compression and Quantization in AutoML
  • AutoML for Federated Learning
  • AutoML in Healthcare Diagnostics
  • Explainable AutoML
  • Cost-sensitive Learning in AutoML
  • AutoML for Small Data
  • Human-in-the-Loop AutoML.

AI for Healthcare and Medicine

  • Disease Prediction and Early Diagnosis
  • Medical Image Analysis with Deep Learning
  • Drug Discovery and Molecular Modeling
  • Electronic Health Record Analysis
  • Predictive Analytics in Healthcare
  • Personalized Treatment Planning
  • Healthcare Fraud Detection
  • Telemedicine and Remote Patient Monitoring
  • AI in Radiology and Pathology
  • AI in Drug Repurposing
  • AI for Medical Robotics and Surgery
  • Genomic Data Analysis
  • AI-powered Mental Health Assessment
  • Explainable AI in Healthcare Decision Support
  • AI in Epidemiology and Outbreak Prediction.

AI in Finance and Investment

  • Algorithmic Trading and High-frequency Trading
  • Credit Scoring and Risk Assessment
  • Fraud Detection and Anti-money Laundering
  • Portfolio Optimization with AI
  • Financial Market Prediction
  • Sentiment Analysis in Financial News
  • Explainable AI in Financial Decision-making
  • Algorithmic Pricing and Dynamic Pricing Strategies
  • AI in Cryptocurrency and Blockchain
  • Customer Behavior Analysis in Banking
  • Explainable AI in Credit Decisioning
  • AI in Regulatory Compliance
  • Ethical AI in Financial Services
  • AI for Real Estate Investment
  • Automated Financial Reporting.

AI in Climate Change and Sustainability

  • Climate Modeling and Prediction
  • Renewable Energy Forecasting
  • Smart Grid Optimization
  • Energy Consumption Forecasting
  • Carbon Emission Reduction with AI
  • Ecosystem Monitoring and Preservation
  • Precision Agriculture with AI
  • AI for Wildlife Conservation
  • Natural Disaster Prediction and Management
  • Water Resource Management with AI
  • Sustainable Transportation and Urban Planning
  • Climate Change Mitigation Strategies with AI
  • Environmental Impact Assessment with Machine Learning
  • Eco-friendly Supply Chain Optimization
  • Ethical AI in Climate-related Decision Support.

Data Privacy and Security

  • Differential Privacy Mechanisms
  • Federated Learning for Privacy-preserving AI
  • Secure Multi-Party Computation
  • Privacy-enhancing Technologies in Machine Learning
  • Homomorphic Encryption for Machine Learning
  • Ethical Considerations in Data Privacy
  • Privacy-preserving AI in Healthcare
  • AI for Secure Authentication and Access Control
  • Blockchain and AI for Data Security
  • Explainable Privacy in Machine Learning
  • Privacy-preserving AI in Government and Public Services
  • Privacy-compliant AI for IoT and Edge Devices
  • Secure AI Models Sharing and Deployment
  • Privacy-preserving AI in Financial Transactions
  • AI in the Legal Frameworks of Data Privacy.

Global Collaboration in Research

  • International Research Partnerships and Collaboration Models
  • Multilingual and Cross-cultural AI Research
  • Addressing Global Healthcare Challenges with AI
  • Ethical Considerations in International AI Collaborations
  • Interdisciplinary AI Research in Global Challenges
  • AI Ethics and Human Rights in Global Research
  • Data Sharing and Data Access in Global AI Research
  • Cross-border Research Regulations and Compliance
  • AI Innovation Hubs and International Research Centers
  • AI Education and Training for Global Communities
  • Humanitarian AI and AI for Sustainable Development Goals
  • AI for Cultural Preservation and Heritage Protection
  • Collaboration in AI-related Global Crises
  • AI in Cross-cultural Communication and Understanding
  • Global AI for Environmental Sustainability and Conservation.

Emerging Trends and Hot Topics in Machine Learning Research

The landscape of machine learning research topics is constantly evolving. Here are some of the emerging trends and hot topics that are shaping the field:

Ethical AI and Bias Mitigation

As AI systems become more prevalent, addressing ethical concerns and mitigating bias in algorithms are critical research areas.

Interpretable and Explainable Models

Understanding why machine learning models make specific decisions is crucial for their adoption in sensitive areas, such as healthcare and finance.

Meta-Learning and AutoML

Meta-learning algorithms are designed to enable machines to learn how to learn, while AutoML aims to automate the machine learning process itself.

AI for Healthcare and Medicine

Machine learning is revolutionizing the healthcare sector, from diagnostic tools to drug discovery and patient care.

AI in Finance and Investment

Algorithmic trading, risk assessment, and fraud detection are just a few applications of AI in finance, creating a wealth of research opportunities.

AI in Climate Change and Sustainability

Machine learning research is crucial in analyzing and mitigating the impacts of climate change and promoting sustainable practices.

Challenges and Future Directions

While machine learning research has made tremendous strides, it also faces several challenges:

  • Data Privacy and Security: As machine learning models require vast amounts of data, protecting individual privacy and data security are paramount concerns.
  • Scalability and Efficiency: Developing efficient algorithms that can handle increasingly large datasets and complex computations remains a challenge.
  • Ensuring Fairness and Transparency: Addressing bias in machine learning models and making their decisions transparent is essential for equitable AI systems.
  • Quantum Computing and Machine Learning: The integration of quantum computing and machine learning has the potential to revolutionize the field, but it also presents unique challenges.
  • Global Collaboration in Research: Machine learning research benefits from collaboration on a global scale. Ensuring that researchers from diverse backgrounds work together is vital for progress.

Resources for Machine Learning Researchers

If you’re looking to embark on a journey in machine learning research topics, there are various resources at your disposal:

  • Journals and Conferences

Journals such as the “Journal of Machine Learning Research” and conferences like NeurIPS and ICML provide a platform for publishing and discussing research findings.

  • Online Communities and Forums

Platforms like Stack Overflow, GitHub, and dedicated forums for machine learning provide spaces for collaboration and problem-solving.

  • Datasets and Tools

Open-source datasets and tools like TensorFlow and PyTorch simplify the research process by providing access to data and pre-built models.

  • Research Grants and Funding Opportunities

Many organizations and government agencies offer research grants and funding for machine learning projects. Seek out these opportunities to support your research.

Machine learning research is like a superhero in the world of technology. To be a part of this exciting journey, it’s important to choose the right machine learning research topics and keep up with the latest trends.

Machine learning research makes our lives better. It powers things like smart assistants and life-saving medical tools. It’s like the force driving the future of technology and society.

But, there are challenges too. We need to work together and be ethical in our research. Everyone should benefit from this technology. The future of machine learning research is incredibly bright. If you want to be a part of it, get ready for an exciting adventure. You can help create new solutions and make a big impact on the world.


Machine Learning - CMU

PhD Dissertations

[all are .pdf files].

Reliable and Practical Machine Learning for Dynamic Healthcare Settings Helen Zhou, 2023

Automatic customization of large-scale spiking network models to neuronal population activity (unavailable) Shenghao Wu, 2023

Estimation of BVk functions from scattered data (unavailable) Addison J. Hu, 2023

Rethinking object categorization in computer vision (unavailable) Jayanth Koushik, 2023

Advances in Statistical Gene Networks Jinjin Tian, 2023

Post-hoc calibration without distributional assumptions Chirag Gupta, 2023

The Role of Noise, Proxies, and Dynamics in Algorithmic Fairness Nil-Jana Akpinar, 2023

Collaborative learning by leveraging siloed data Sebastian Caldas, 2023

Modeling Epidemiological Time Series Aaron Rumack, 2023

Human-Centered Machine Learning: A Statistical and Algorithmic Perspective Leqi Liu, 2023

Uncertainty Quantification under Distribution Shifts Aleksandr Podkopaev, 2023

Probabilistic Reinforcement Learning: Using Data to Define Desired Outcomes, and Inferring How to Get There Benjamin Eysenbach, 2023

Comparing Forecasters and Abstaining Classifiers Yo Joong Choe, 2023

Using Task Driven Methods to Uncover Representations of Human Vision and Semantics Aria Yuan Wang, 2023

Data-driven Decisions - An Anomaly Detection Perspective Shubhranshu Shekhar, 2023

Applied Mathematics of the Future Kin G. Olivares, 2023

Methods and Applications of Explainable Machine Learning Joon Sik Kim, 2023

Neural Reasoning for Question Answering Haitian Sun, 2023

Principled Machine Learning for Societally Consequential Decision Making Amanda Coston, 2023

Long term brain dynamics extend cognitive neuroscience to timescales relevant for health and physiology Maxwell B. Wang

Long term brain dynamics extend cognitive neuroscience to timescales relevant for health and physiology Darby M. Losey, 2023

Calibrated Conditional Density Models and Predictive Inference via Local Diagnostics David Zhao, 2023

Towards an Application-based Pipeline for Explainability Gregory Plumb, 2022

Objective Criteria for Explainable Machine Learning Chih-Kuan Yeh, 2022

Making Scientific Peer Review Scientific Ivan Stelmakh, 2022

Facets of regularization in high-dimensional learning: Cross-validation, risk monotonization, and model complexity Pratik Patil, 2022

Active Robot Perception using Programmable Light Curtains Siddharth Ancha, 2022

Strategies for Black-Box and Multi-Objective Optimization Biswajit Paria, 2022

Unifying State and Policy-Level Explanations for Reinforcement Learning Nicholay Topin, 2022

Sensor Fusion Frameworks for Nowcasting Maria Jahja, 2022

Equilibrium Approaches to Modern Deep Learning Shaojie Bai, 2022

Towards General Natural Language Understanding with Probabilistic Worldbuilding Abulhair Saparov, 2022

Applications of Point Process Modeling to Spiking Neurons (Unavailable) Yu Chen, 2021

Neural variability: structure, sources, control, and data augmentation Akash Umakantha, 2021

Structure and time course of neural population activity during learning Jay Hennig, 2021

Cross-view Learning with Limited Supervision Yao-Hung Hubert Tsai, 2021

Meta Reinforcement Learning through Memory Emilio Parisotto, 2021

Learning Embodied Agents with Scalably-Supervised Reinforcement Learning Lisa Lee, 2021

Learning to Predict and Make Decisions under Distribution Shift Yifan Wu, 2021

Statistical Game Theory Arun Sai Suggala, 2021

Towards Knowledge-capable AI: Agents that See, Speak, Act and Know Kenneth Marino, 2021

Learning and Reasoning with Fast Semidefinite Programming and Mixing Methods Po-Wei Wang, 2021

Bridging Language in Machines with Language in the Brain Mariya Toneva, 2021

Curriculum Learning Otilia Stretcu, 2021

Principles of Learning in Multitask Settings: A Probabilistic Perspective Maruan Al-Shedivat, 2021

Towards Robust and Resilient Machine Learning Adarsh Prasad, 2021

Towards Training AI Agents with All Types of Experiences: A Unified ML Formalism Zhiting Hu, 2021

Building Intelligent Autonomous Navigation Agents Devendra Chaplot, 2021

Learning to See by Moving: Self-supervising 3D Scene Representations for Perception, Control, and Visual Reasoning Hsiao-Yu Fish Tung, 2021

Statistical Astrophysics: From Extrasolar Planets to the Large-scale Structure of the Universe Collin Politsch, 2020

Causal Inference with Complex Data Structures and Non-Standard Effects Kwhangho Kim, 2020

Networks, Point Processes, and Networks of Point Processes Neil Spencer, 2020

Dissecting neural variability using population recordings, network models, and neurofeedback (Unavailable) Ryan Williamson, 2020

Predicting Health and Safety: Essays in Machine Learning for Decision Support in the Public Sector Dylan Fitzpatrick, 2020

Towards a Unified Framework for Learning and Reasoning Han Zhao, 2020

Learning DAGs with Continuous Optimization Xun Zheng, 2020

Machine Learning and Multiagent Preferences Ritesh Noothigattu, 2020

Learning and Decision Making from Diverse Forms of Information Yichong Xu, 2020

Towards Data-Efficient Machine Learning Qizhe Xie, 2020

Change modeling for understanding our world and the counterfactual one(s) William Herlands, 2020

Machine Learning in High-Stakes Settings: Risks and Opportunities Maria De-Arteaga, 2020

Data Decomposition for Constrained Visual Learning Calvin Murdock, 2020

Structured Sparse Regression Methods for Learning from High-Dimensional Genomic Data Micol Marchetti-Bowick, 2020

Towards Efficient Automated Machine Learning Liam Li, 2020

Learning Collections of Functions Emmanouil Antonios Platanios, 2020

Provable, structured, and efficient methods for robustness of deep networks to adversarial examples Eric Wong, 2020

Reconstructing and Mining Signals: Algorithms and Applications Hyun Ah Song, 2020

Probabilistic Single Cell Lineage Tracing Chieh Lin, 2020

Graphical network modeling of phase coupling in brain activity (unavailable) Josue Orellana, 2019

Strategic Exploration in Reinforcement Learning - New Algorithms and Learning Guarantees Christoph Dann, 2019

Learning Generative Models using Transformations Chun-Liang Li, 2019

Estimating Probability Distributions and their Properties Shashank Singh, 2019

Post-Inference Methods for Scalable Probabilistic Modeling and Sequential Decision Making Willie Neiswanger, 2019

Accelerating Text-as-Data Research in Computational Social Science Dallas Card, 2019

Multi-view Relationships for Analytics and Inference Eric Lei, 2019

Information flow in networks based on nonstationary multivariate neural recordings Natalie Klein, 2019

Competitive Analysis for Machine Learning & Data Science Michael Spece, 2019

The When, Where and Why of Human Memory Retrieval Qiong Zhang, 2019

Towards Effective and Efficient Learning at Scale Adams Wei Yu, 2019

Towards Literate Artificial Intelligence Mrinmaya Sachan, 2019

Learning Gene Networks Underlying Clinical Phenotypes Under SNP Perturbations From Genome-Wide Data Calvin McCarter, 2019

Unified Models for Dynamical Systems Carlton Downey, 2019

Anytime Prediction and Learning for the Balance between Computation and Accuracy Hanzhang Hu, 2019

Statistical and Computational Properties of Some "User-Friendly" Methods for High-Dimensional Estimation Alnur Ali, 2019

Nonparametric Methods with Total Variation Type Regularization Veeranjaneyulu Sadhanala, 2019

New Advances in Sparse Learning, Deep Networks, and Adversarial Learning: Theory and Applications Hongyang Zhang, 2019

Gradient Descent for Non-convex Problems in Modern Machine Learning Simon Shaolei Du, 2019

Selective Data Acquisition in Learning and Decision Making Problems Yining Wang, 2019

Anomaly Detection in Graphs and Time Series: Algorithms and Applications Bryan Hooi, 2019

Neural dynamics and interactions in the human ventral visual pathway Yuanning Li, 2018

Tuning Hyperparameters without Grad Students: Scaling up Bandit Optimisation Kirthevasan Kandasamy, 2018

Teaching Machines to Classify from Natural Language Interactions Shashank Srivastava, 2018

Statistical Inference for Geometric Data Jisu Kim, 2018

Representation Learning @ Scale Manzil Zaheer, 2018

Diversity-promoting and Large-scale Machine Learning for Healthcare Pengtao Xie, 2018

Distribution and Histogram (DIsH) Learning Junier Oliva, 2018

Stress Detection for Keystroke Dynamics Shing-Hon Lau, 2018

Sublinear-Time Learning and Inference for High-Dimensional Models Enxu Yan, 2018

Neural population activity in the visual cortex: Statistical methods and application Benjamin Cowley, 2018

Efficient Methods for Prediction and Control in Partially Observable Environments Ahmed Hefny, 2018

Learning with Staleness Wei Dai, 2018

Statistical Approach for Functionally Validating Transcription Factor Bindings Using Population SNP and Gene Expression Data Jing Xiang, 2017

New Paradigms and Optimality Guarantees in Statistical Learning and Estimation Yu-Xiang Wang, 2017

Dynamic Question Ordering: Obtaining Useful Information While Reducing User Burden Kirstin Early, 2017

New Optimization Methods for Modern Machine Learning Sashank J. Reddi, 2017

Active Search with Complex Actions and Rewards Yifei Ma, 2017

Why Machine Learning Works George D. Montañez, 2017

Source-Space Analyses in MEG/EEG and Applications to Explore Spatio-temporal Neural Dynamics in Human Vision Ying Yang, 2017

Computational Tools for Identification and Analysis of Neuronal Population Activity Pengcheng Zhou, 2016

Expressive Collaborative Music Performance via Machine Learning Gus (Guangyu) Xia, 2016

Supervision Beyond Manual Annotations for Learning Visual Representations Carl Doersch, 2016

Exploring Weakly Labeled Data Across the Noise-Bias Spectrum Robert W. H. Fisher, 2016

Optimizing Optimization: Scalable Convex Programming with Proximal Operators Matt Wytock, 2016

Combining Neural Population Recordings: Theory and Application William Bishop, 2015

Discovering Compact and Informative Structures through Data Partitioning Madalina Fiterau-Brostean, 2015

Machine Learning in Space and Time Seth R. Flaxman, 2015

The Time and Location of Natural Reading Processes in the Brain Leila Wehbe, 2015

Shape-Constrained Estimation in High Dimensions Min Xu, 2015

Spectral Probabilistic Modeling and Applications to Natural Language Processing Ankur Parikh, 2015

Computational and Statistical Advances in Testing and Learning Aaditya Kumar Ramdas, 2015

Corpora and Cognition: The Semantic Composition of Adjectives and Nouns in the Human Brain Alona Fyshe, 2015

Learning Statistical Features of Scene Images Wooyoung Lee, 2014

Towards Scalable Analysis of Images and Videos Bin Zhao, 2014

Statistical Text Analysis for Social Science Brendan T. O'Connor, 2014

Modeling Large Social Networks in Context Qirong Ho, 2014

Semi-Cooperative Learning in Smart Grid Agents Prashant P. Reddy, 2013

On Learning from Collective Data Liang Xiong, 2013

Exploiting Non-sequence Data in Dynamic Model Learning Tzu-Kuo Huang, 2013

Mathematical Theories of Interaction with Oracles Liu Yang, 2013

Short-Sighted Probabilistic Planning Felipe W. Trevizan, 2013

Statistical Models and Algorithms for Studying Hand and Finger Kinematics and their Neural Mechanisms Lucia Castellanos, 2013

Approximation Algorithms and New Models for Clustering and Learning Pranjal Awasthi, 2013

Uncovering Structure in High-Dimensions: Networks and Multi-task Learning Problems Mladen Kolar, 2013

Learning with Sparsity: Structures, Optimization and Applications Xi Chen, 2013

GraphLab: A Distributed Abstraction for Large Scale Machine Learning Yucheng Low, 2013

Graph Structured Normal Means Inference James Sharpnack, 2013 (Joint Statistics & ML PhD)

Probabilistic Models for Collecting, Analyzing, and Modeling Expression Data Hai-Son Phuoc Le, 2013

Learning Large-Scale Conditional Random Fields Joseph K. Bradley, 2013

New Statistical Applications for Differential Privacy Rob Hall, 2013 (Joint Statistics & ML PhD)

Parallel and Distributed Systems for Probabilistic Reasoning Joseph Gonzalez, 2012

Spectral Approaches to Learning Predictive Representations Byron Boots, 2012

Attribute Learning using Joint Human and Machine Computation Edith L. M. Law, 2012

Statistical Methods for Studying Genetic Variation in Populations Suyash Shringarpure, 2012

Data Mining Meets HCI: Making Sense of Large Graphs Duen Horng (Polo) Chau, 2012

Learning with Limited Supervision by Input and Output Coding Yi Zhang, 2012

Target Sequence Clustering Benjamin Shih, 2011

Nonparametric Learning in High Dimensions Han Liu, 2010 (Joint Statistics & ML PhD)

Structural Analysis of Large Networks: Observations and Applications Mary McGlohon, 2010

Modeling Purposeful Adaptive Behavior with the Principle of Maximum Causal Entropy Brian D. Ziebart, 2010

Tractable Algorithms for Proximity Search on Large Graphs Purnamrita Sarkar, 2010

Rare Category Analysis Jingrui He, 2010

Coupled Semi-Supervised Learning Andrew Carlson, 2010

Fast Algorithms for Querying and Mining Large Graphs Hanghang Tong, 2009

Efficient Matrix Models for Relational Learning Ajit Paul Singh, 2009

Exploiting Domain and Task Regularities for Robust Named Entity Recognition Andrew O. Arnold, 2009

Theoretical Foundations of Active Learning Steve Hanneke, 2009

Generalized Learning Factors Analysis: Improving Cognitive Models with Machine Learning Hao Cen, 2009

Detecting Patterns of Anomalies Kaustav Das, 2009

Dynamics of Large Networks Jurij Leskovec, 2008

Computational Methods for Analyzing and Modeling Gene Regulation Dynamics Jason Ernst, 2008

Stacked Graphical Learning Zhenzhen Kou, 2007

Actively Learning Specific Function Properties with Applications to Statistical Inference Brent Bryan, 2007

Approximate Inference, Structure Learning and Feature Estimation in Markov Random Fields Pradeep Ravikumar, 2007

Scalable Graphical Models for Social Networks Anna Goldenberg, 2007

Measure Concentration of Strongly Mixing Processes with Applications Leonid Kontorovich, 2007

Tools for Graph Mining Deepayan Chakrabarti, 2005

Automatic Discovery of Latent Variable Models Ricardo Silva, 2005

The Future of AI Research: 20 Thesis Ideas for Undergraduate Students in Machine Learning and Deep Learning for 2023!

A comprehensive guide for crafting an original and innovative thesis in the field of AI.

By Aarafat Islam

“The beauty of machine learning is that it can be applied to any problem you want to solve, as long as you can provide the computer with enough examples.” — Andrew Ng

This article provides a list of 20 potential thesis ideas for an undergraduate program in machine learning and deep learning in 2023. Each thesis idea includes an introduction, which presents a brief overview of the topic and the research objectives. The ideas span different areas of machine learning and deep learning, such as computer vision, natural language processing, robotics, finance, drug discovery, and more. Each idea also comes with explanations and examples that can help guide the research and clarify the potential contributions and outcomes of the proposed work. Finally, the article emphasizes the importance of originality and of proper citation in order to avoid plagiarism.

1. Investigating the use of Generative Adversarial Networks (GANs) in medical imaging:  A deep learning approach to improve the accuracy of medical diagnoses.

Introduction:  Medical imaging is an important tool in the diagnosis and treatment of various medical conditions. However, accurately interpreting medical images can be challenging, especially for less experienced doctors. This thesis aims to explore the use of GANs in medical imaging, in order to improve the accuracy of medical diagnoses.
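To make the GAN objective concrete before diving into the medical-imaging application, here is a minimal numpy sketch of the two losses that drive adversarial training: the standard discriminator loss and the non-saturating generator loss. The function name and toy probability values are illustrative, not from any particular imaging system.

```python
import numpy as np

def gan_losses(d_real, d_fake):
    """Compute GAN losses from the discriminator's outputs.

    d_real: discriminator probabilities on real samples.
    d_fake: discriminator probabilities on generated samples.
    """
    # Discriminator wants d_real -> 1 and d_fake -> 0.
    d_loss = -np.mean(np.log(d_real) + np.log(1.0 - d_fake))
    # Non-saturating generator loss: generator wants d_fake -> 1.
    g_loss = -np.mean(np.log(d_fake))
    return d_loss, g_loss

# Toy values: a discriminator that is currently winning.
d_loss, g_loss = gan_losses(np.array([0.9, 0.8]), np.array([0.2, 0.1]))
```

In a thesis project these probabilities would come from a neural discriminator evaluated on real and synthetic medical images, and both networks would be updated by gradient descent on their respective losses.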

2. Exploring the use of deep learning in natural language generation (NLG): An analysis of the current state-of-the-art and future potential.

Introduction:  Natural language generation is an important field in natural language processing (NLP) that deals with creating human-like text automatically. Deep learning has shown promising results in NLP tasks such as machine translation, sentiment analysis, and question-answering. This thesis aims to explore the use of deep learning in NLG and analyze the current state-of-the-art models, as well as potential future developments.

3. Development and evaluation of deep reinforcement learning (RL) for robotic navigation and control.

Introduction:  Robotic navigation and control are challenging tasks, which require a high degree of intelligence and adaptability. Deep RL has shown promising results in various robotics tasks, such as robotic arm control, autonomous navigation, and manipulation. This thesis aims to develop and evaluate a deep RL-based approach for robotic navigation and control and evaluate its performance in various environments and tasks.
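Deep RL for navigation builds on the same update rule as tabular Q-learning, which can be shown end-to-end in a few lines. The following toy sketch (a hand-made 1-D corridor, not a robotics benchmark) learns to walk right to a goal state with off-policy Q-learning under a uniformly random behaviour policy; all names and constants are illustrative.

```python
import numpy as np

def q_learning_chain(n_states=6, episodes=300, alpha=0.5, gamma=0.9, seed=0):
    # Tiny 1-D corridor: start in state 0, goal in state n_states - 1.
    # Actions: 0 = step left, 1 = step right (deterministic transitions).
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, 2))
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            a = int(rng.integers(2))                     # random exploration
            s2 = max(0, s - 1) if a == 0 else s + 1
            done = s2 == n_states - 1
            r = 1.0 if done else -0.01                   # small step cost
            target = r if done else r + gamma * Q[s2].max()
            Q[s, a] += alpha * (target - Q[s, a])        # Q-learning update
            s = s2
    return Q

Q = q_learning_chain()
policy = Q.argmax(axis=1)  # greedy policy per state (1 = "go right")
```

A deep RL approach replaces the table `Q` with a neural network over sensor observations, but the temporal-difference target has the same form.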

4. Investigating the use of deep learning for drug discovery and development.

Introduction:  Drug discovery and development is a time-consuming and expensive process, which often involves high failure rates. Deep learning has been used to improve various tasks in bioinformatics and biotechnology, such as protein structure prediction and gene expression analysis. This thesis aims to investigate the use of deep learning for drug discovery and development and examine its potential to improve the efficiency and accuracy of the drug development process.

5. Comparison of deep learning and traditional machine learning methods for anomaly detection in time series data.

Introduction:  Anomaly detection in time series data is a challenging task, which is important in various fields such as finance, healthcare, and manufacturing. Deep learning methods have been used to improve anomaly detection in time series data, while traditional machine learning methods have been widely used as well. This thesis aims to compare deep learning and traditional machine learning methods for anomaly detection in time series data and examine their respective strengths and weaknesses.
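A comparison like this needs a traditional baseline; one of the simplest is a rolling z-score detector, sketched below in numpy. The window size, threshold, and injected spike are illustrative choices, not values from any published benchmark.

```python
import numpy as np

def rolling_zscore_anomalies(x, window=20, threshold=3.0):
    """Flag points that deviate from the recent past by more than
    `threshold` standard deviations (a classical, non-learned baseline)."""
    flags = np.zeros(len(x), dtype=bool)
    for t in range(window, len(x)):
        hist = x[t - window:t]
        mu, sigma = hist.mean(), hist.std()
        if sigma > 0 and abs(x[t] - mu) / sigma > threshold:
            flags[t] = True
    return flags

rng = np.random.default_rng(0)
series = rng.normal(0.0, 1.0, 200)
series[150] += 10.0  # inject an obvious anomaly
flags = rolling_zscore_anomalies(series)
```

A deep learning competitor (e.g., an autoencoder scoring reconstruction error) would be evaluated against exactly this kind of baseline on the same labeled series.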

6. Use of deep transfer learning in speech recognition and synthesis.

Introduction:  Speech recognition and synthesis are areas of natural language processing that focus on converting spoken language to text and vice versa. Transfer learning has been widely used in deep learning-based speech recognition and synthesis systems to improve their performance by reusing the features learned from other tasks. This thesis aims to investigate the use of transfer learning in speech recognition and synthesis and how it improves the performance of the system in comparison to traditional methods.

7. The use of deep learning for financial prediction.

Introduction:  Financial prediction is a challenging task that requires a high degree of intelligence and adaptability, especially in the field of stock market prediction. Deep learning has shown promising results in various financial prediction tasks, such as stock price prediction and credit risk analysis. This thesis aims to investigate the use of deep learning for financial prediction and examine its potential to improve the accuracy of financial forecasting.

8. Investigating the use of deep learning for computer vision in agriculture.

Introduction:  Computer vision has the potential to revolutionize the field of agriculture by improving crop monitoring, precision farming, and yield prediction. Deep learning has been used to improve various computer vision tasks, such as object detection, semantic segmentation, and image classification. This thesis aims to investigate the use of deep learning for computer vision in agriculture and examine its potential to improve the efficiency and accuracy of crop monitoring and precision farming.

9. Development and evaluation of deep learning models for generative design in engineering and architecture.

Introduction:  Generative design is a powerful tool in engineering and architecture that can help optimize designs and reduce human error. Deep learning has been used to improve various generative design tasks, such as design optimization and form generation. This thesis aims to develop and evaluate deep learning models for generative design in engineering and architecture and examine their potential to improve the efficiency and accuracy of the design process.

10. Investigating the use of deep learning for natural language understanding.

Introduction:  Natural language understanding is a complex task of natural language processing that involves extracting meaning from text. Deep learning has been used to improve various NLP tasks, such as machine translation, sentiment analysis, and question-answering. This thesis aims to investigate the use of deep learning for natural language understanding and examine its potential to improve the efficiency and accuracy of natural language understanding systems.

11. Comparing deep learning and traditional machine learning methods for image compression.

Introduction:  Image compression is an important task in image processing and computer vision. It enables faster data transmission and storage of image files. Deep learning methods have been used to improve image compression, while traditional machine learning methods have been widely used as well. This thesis aims to compare deep learning and traditional machine learning methods for image compression and examine their respective strengths and weaknesses.
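The classical baseline that learned compressors are usually measured against is low-rank approximation via the SVD, which truncated to its top k singular components stores far fewer numbers than the raw pixels. The sketch below uses a synthetic low-rank "image"; the sizes and rank are illustrative.

```python
import numpy as np

def svd_compress(img, k):
    """Keep only the top-k singular components of a 2-D image array.
    Storage drops from m*n values to k*(m + n + 1)."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

rng = np.random.default_rng(0)
# Synthetic "image" that is exactly rank 3, so k=3 recovers it perfectly.
img = rng.normal(size=(64, 3)) @ rng.normal(size=(3, 64))
approx = svd_compress(img, 3)
```

For natural images the reconstruction is lossy, and the thesis would compare this rate/distortion trade-off against learned (e.g., autoencoder-based) codecs.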

12. Using deep learning for sentiment analysis in social media.

Introduction:  Sentiment analysis in social media is an important task that can help businesses and organizations understand their customers’ opinions and feedback. Deep learning has been used to improve sentiment analysis in social media, by training models on large datasets of social media text. This thesis aims to use deep learning for sentiment analysis in social media, and evaluate its performance against traditional machine learning methods.
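The traditional baseline here is a bag-of-words classifier; a self-contained Naive Bayes with Laplace smoothing fits in a few dozen lines. The toy training documents and labels below are illustrative, not a real social-media dataset.

```python
from collections import Counter
import math

def train_nb(docs):
    """docs: list of (text, label) pairs. Returns word counts per class,
    class priors, and the vocabulary."""
    counts = {"pos": Counter(), "neg": Counter()}
    prior = Counter()
    for text, label in docs:
        prior[label] += 1
        counts[label].update(text.lower().split())
    vocab = set(counts["pos"]) | set(counts["neg"])
    return counts, prior, vocab

def predict_nb(model, text):
    counts, prior, vocab = model
    total = sum(prior.values())
    best, best_lp = None, float("-inf")
    for label in prior:
        lp = math.log(prior[label] / total)
        n = sum(counts[label].values())
        for w in text.lower().split():
            # Laplace (add-one) smoothing handles unseen words.
            lp += math.log((counts[label][w] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [("great product love it", "pos"), ("awful terrible hate it", "neg"),
        ("really love this", "pos"), ("terrible waste of money", "neg")]
model = train_nb(docs)
```

A deep model (e.g., a fine-tuned transformer) would be evaluated against this kind of baseline on held-out posts.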

13. Investigating the use of deep learning for image generation.

Introduction:  Image generation is a task in computer vision that involves creating new images from scratch or modifying existing images. Deep learning has been used to improve various image generation tasks, such as super-resolution, style transfer, and face generation. This thesis aims to investigate the use of deep learning for image generation and examine its potential to improve the quality and diversity of generated images.

14. Development and evaluation of deep learning models for anomaly detection in cybersecurity.

Introduction:  Anomaly detection in cybersecurity is an important task that can help detect and prevent cyber-attacks. Deep learning has been used to improve various anomaly detection tasks, such as intrusion detection and malware detection. This thesis aims to develop and evaluate deep learning models for anomaly detection in cybersecurity and examine their potential to improve the efficiency and accuracy of cybersecurity systems.

15. Investigating the use of deep learning for natural language summarization.

Introduction:  Natural language summarization is an important task in natural language processing that involves creating a condensed version of a text that preserves its main meaning. Deep learning has been used to improve various natural language summarization tasks, such as document summarization and headline generation. This thesis aims to investigate the use of deep learning for natural language summarization and examine its potential to improve the efficiency and accuracy of natural language summarization systems.

16. Development and evaluation of deep learning models for facial expression recognition.

Introduction:  Facial expression recognition is an important task in computer vision and has many practical applications, such as human-computer interaction, emotion recognition, and psychological studies. Deep learning has been used to improve facial expression recognition, by training models on large datasets of images. This thesis aims to develop and evaluate deep learning models for facial expression recognition and examine their performance against traditional machine learning methods.

17. Investigating the use of deep learning for generative models in music and audio.

Introduction:  Music and audio synthesis is an important task in audio processing, which has many practical applications, such as music generation and speech synthesis. Deep learning has been used to improve generative models for music and audio, by training models on large datasets of audio data. This thesis aims to investigate the use of deep learning for generative models in music and audio and examine its potential to improve the quality and diversity of generated audio.

18. Comparing deep learning models with traditional algorithms for anomaly detection in network traffic.

Introduction:  Anomaly detection in network traffic is an important task that can help detect and prevent cyber-attacks. Deep learning models have been used for this task, and traditional methods such as clustering and rule-based systems are widely used as well. This thesis aims to compare deep learning models with traditional algorithms for anomaly detection in network traffic and analyze the trade-offs between the models in terms of accuracy and scalability.

19. Investigating the use of deep learning for improving recommender systems.

Introduction:  Recommender systems are widely used in many applications such as online shopping, music streaming, and movie streaming. Deep learning has been used to improve the performance of recommender systems, by training models on large datasets of user-item interactions. This thesis aims to investigate the use of deep learning for improving recommender systems and compare its performance with traditional content-based and collaborative filtering approaches.
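The collaborative-filtering baseline that deep recommenders are compared against is matrix factorization trained by SGD on observed ratings. The toy rating matrix, learning rate, and rank below are illustrative choices for a self-contained sketch.

```python
import numpy as np

def factorize(R, mask, k=2, steps=3000, lr=0.02, reg=0.01, seed=0):
    """Factor R ~= U @ V.T using SGD over the observed entries (mask)."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    U = rng.normal(0, 0.1, (m, k))
    V = rng.normal(0, 0.1, (n, k))
    rows, cols = np.nonzero(mask)
    for _ in range(steps):
        for i, j in zip(rows, cols):
            err = R[i, j] - U[i] @ V[j]
            # Gradient steps with L2 regularization on the factors.
            U[i] += lr * (err * V[j] - reg * U[i])
            V[j] += lr * (err * U[i] - reg * V[j])
    return U, V

# Toy user-item ratings; zeros are unobserved.
R = np.array([[5.0, 3.0, 0.0], [4.0, 0.0, 1.0], [1.0, 1.0, 5.0]])
mask = R > 0
U, V = factorize(R, mask)
pred = U @ V.T  # predictions, including the unobserved cells
```

Deep approaches replace the bilinear form `U[i] @ V[j]` with a neural network over user and item embeddings; the probabilistic variants mentioned at the top of this page additionally put distributions on U and V to quantify uncertainty.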

20. Development and evaluation of deep learning models for multi-modal data analysis.

Introduction:  Multi-modal data analysis is the task of analyzing and understanding data from multiple sources such as text, images, and audio. Deep learning has been used to improve multi-modal data analysis, by training models on large datasets of multi-modal data. This thesis aims to develop and evaluate deep learning models for multi-modal data analysis and analyze their potential to improve performance in comparison to single-modal models.

I hope that this article has provided you with a useful guide for your thesis research in machine learning and deep learning. Remember to conduct a thorough literature review and to include proper citations in your work, as well as to be original in your research to avoid plagiarism. I wish you all the best of luck with your thesis and your research endeavors!

10 Compelling Machine Learning Ph.D. Dissertations for 2020

Machine Learning Modeling Research posted by Daniel Gutierrez, ODSC, August 19, 2020

As a data scientist, an integral part of my work in the field revolves around keeping current with research coming out of academia. I frequently scour arXiv.org for late-breaking papers that show trends and reveal fertile areas of research. Other sources of valuable research developments are in the form of Ph.D. dissertations, the culmination of a doctoral candidate’s work to confer his/her degree. Ph.D. candidates are highly motivated to choose research topics that establish new and creative paths toward discovery in their field of study. Their dissertations are highly focused on a specific problem. If you can find a dissertation that aligns with your areas of interest, consuming the research is an excellent way to do a deep dive into the technology. After reviewing hundreds of recent theses from universities all over the country, I present 10 machine learning dissertations that I found compelling in terms of my own areas of interest.

I hope you’ll find several that match your own fields of inquiry. Each thesis may take a while to consume but will result in hours of satisfying summer reading. Enjoy!

1. Bayesian Modeling and Variable Selection for Complex Data

As we routinely encounter high-throughput data sets in complex biological and environmental research, developing novel models and methods for variable selection has received widespread attention. This dissertation addresses a few key challenges in Bayesian modeling and variable selection for high-dimensional data with complex spatial structures. 

2. Topics in Statistical Learning with a Focus on Large Scale Data

Big data vary in shape and call for different approaches. One type of big data is tall data, i.e., a very large number of samples but not too many features. This dissertation describes a general communication-efficient algorithm for distributed statistical learning on this type of big data. The algorithm distributes the samples uniformly across multiple machines and uses common reference data to improve the performance of the local estimates. The algorithm enables potentially much faster analysis, at a small cost to statistical performance.
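The simplest instance of this divide-and-conquer idea is averaging local least-squares estimates (without the reference-data refinement the dissertation adds). The sketch below is a hedged illustration with simulated data; the machine count and noise level are arbitrary.

```python
import numpy as np

def distributed_ols(X, y, n_machines=4):
    """Split the rows across machines, solve OLS locally, and average
    the local coefficient estimates (one-shot communication)."""
    Xs = np.array_split(X, n_machines)
    ys = np.array_split(y, n_machines)
    local = [np.linalg.lstsq(Xi, yi, rcond=None)[0] for Xi, yi in zip(Xs, ys)]
    return np.mean(local, axis=0)

rng = np.random.default_rng(1)
beta = np.array([2.0, -1.0, 0.5])
X = rng.normal(size=(400, 3))
y = X @ beta + rng.normal(scale=0.1, size=400)
beta_hat = distributed_ols(X, y)
```

Each machine communicates only its coefficient vector, so communication cost is independent of the sample size — the trade-off being a small loss of statistical efficiency relative to fitting on the pooled data.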

Another type of big data is wide data, i.e., too many features but a limited number of samples. It is also called high-dimensional data, to which many classical statistical methods are not applicable.

This dissertation discusses a method of dimensionality reduction for high-dimensional classification. The method partitions features into independent communities and splits the original classification problem into separate smaller ones. It enables parallel computing and produces more interpretable results.

3. Sets as Measures: Optimization and Machine Learning

The purpose of this machine learning dissertation is to address the following simple question:

How do we design efficient algorithms to solve optimization or machine learning problems where the decision variable (or target label) is a set of unknown cardinality?

Optimization and machine learning have proved remarkably successful in applications requiring the choice of single vectors. Some tasks, in particular many inverse problems, call for the design, or estimation, of sets of objects. When the size of these sets is a priori unknown, directly applying optimization or machine learning techniques designed for single vectors appears difficult. The work in this dissertation shows that a very old idea for transforming sets into elements of a vector space (namely, a space of measures), a common trick in theoretical analysis, generates effective practical algorithms.

4. A Geometric Perspective on Some Topics in Statistical Learning

Modern science and engineering often generate data sets with a large sample size and a comparably large dimension which puts classic asymptotic theory into question in many ways. Therefore, the main focus of this dissertation is to develop a fundamental understanding of statistical procedures for estimation and hypothesis testing from a non-asymptotic point of view, where both the sample size and problem dimension grow hand in hand. A range of different problems are explored in this thesis, including work on the geometry of hypothesis testing, adaptivity to local structure in estimation, effective methods for shape-constrained problems, and early stopping with boosting algorithms. The treatment of these different problems shares the common theme of emphasizing the underlying geometric structure.

5. Essays on Random Forest Ensembles

A random forest is a popular machine learning ensemble method that has proven successful in solving a wide range of classification problems. While other successful classifiers, such as boosting algorithms or neural networks, admit natural interpretations as maximum likelihood, a suitable statistical interpretation is much more elusive for a random forest. The first part of this dissertation demonstrates that a random forest is a fruitful framework in which to study AdaBoost and deep neural networks. The work explores the concept and utility of interpolation, the ability of a classifier to perfectly fit its training data. The second part of this dissertation places a random forest on more sound statistical footing by framing it as kernel regression with the proximity kernel. The work then analyzes the parameters that control the bandwidth of this kernel and discusses useful generalizations.
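To illustrate the proximity-kernel view, the sketch below computes proximity-weighted predictions from hand-made leaf assignments (not from an actual fitted forest): the proximity of two points is the fraction of trees in which they fall in the same leaf, and predictions are closely related to a kernel-regression average with those weights. All arrays here are toy assumptions.

```python
import numpy as np

def proximity_predict(train_leaves, y_train, test_leaves):
    """train_leaves: (n_train, n_trees) leaf index of each training point
    in each tree. Prediction for a test point = proximity-weighted average
    of the training targets (Nadaraya-Watson with the proximity kernel)."""
    preds = []
    for leaves in test_leaves:
        prox = (train_leaves == leaves).mean(axis=1)  # shared-leaf fraction
        preds.append(float((prox * y_train).sum() / prox.sum()))
    return np.array(preds)

# Toy leaf assignments: 4 training points in a 3-tree "forest".
train_leaves = np.array([[0, 0, 0], [0, 0, 0], [1, 1, 1], [1, 1, 1]])
y_train = np.array([1.0, 1.0, 5.0, 5.0])
preds = proximity_predict(train_leaves, y_train,
                          np.array([[0, 0, 0], [1, 1, 1]]))
```

The "bandwidth" the dissertation analyzes corresponds to how finely the trees partition the space: deeper trees give a more localized proximity kernel.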

6. Marginally Interpretable Generalized Linear Mixed Models

A popular approach for relating correlated measurements of a non-Gaussian response variable to a set of predictors is to introduce latent random variables and fit a generalized linear mixed model. The conventional strategy for specifying such a model leads to parameter estimates that must be interpreted conditional on the latent variables. In many cases, interest lies not in these conditional parameters, but rather in marginal parameters that summarize the average effect of the predictors across the entire population. Due to the structure of the generalized linear mixed model, the average effect across all individuals in a population is generally not the same as the effect for an average individual. Further complicating matters, obtaining marginal summaries from a generalized linear mixed model often requires evaluation of an analytically intractable integral or use of an approximation. Another popular approach in this setting is to fit a marginal model using generalized estimating equations. This strategy is effective for estimating marginal parameters, but leaves one without a formal model for the data with which to assess quality of fit or make predictions for future observations. Thus, there exists a need for a better approach.

This dissertation defines a class of marginally interpretable generalized linear mixed models that leads to parameter estimates with a marginal interpretation while maintaining the desirable statistical properties of a conditionally specified model. The distinguishing feature of these models is an additive adjustment that accounts for the curvature of the link function and thereby preserves a specific form for the marginal mean after integrating out the latent random variables. 
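The gap between conditional and marginal parameters is easy to demonstrate numerically. The sketch below (a logistic link with simulated random intercepts; the intercept variance and linear predictor are illustrative) shows that the population-average response probability is attenuated relative to the probability for an average individual (one with latent effect zero).

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
b = rng.normal(0.0, 2.0, size=100_000)  # latent random intercepts
eta = 1.0                               # fixed part of the linear predictor

conditional = logistic(eta)             # effect for the *average* individual (b = 0)
marginal = logistic(eta + b).mean()     # *population-average* effect
```

Because the logistic function is nonlinear, averaging over the latent intercepts shrinks the marginal probability toward 0.5; the dissertation's additive adjustment is designed so that a chosen marginal form is preserved after this integration.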

7. On the Detection of Hate Speech, Hate Speakers and Polarized Groups in Online Social Media

The objective of this dissertation is to explore the use of machine learning algorithms in understanding and detecting hate speech, hate speakers and polarized groups in online social media. Beginning with a unique typology for detecting abusive language, the work outlines the distinctions and similarities of different abusive language subtasks (offensive language, hate speech, cyberbullying and trolling) and how we might benefit from the progress made in each area. Specifically, the work suggests that each subtask can be categorized based on whether or not the abusive language being studied 1) is directed at a specific individual, or targets a generalized “Other” and 2) the extent to which the language is explicit versus implicit. The work then uses knowledge gained from this typology to tackle the “problem of offensive language” in hate speech detection. 

8. Lasso Guarantees for Dependent Data

Serially correlated high dimensional data are prevalent in the big data era. In order to predict and learn the complex relationship among the multiple time series, high dimensional modeling has gained importance in various fields such as control theory, statistics, economics, finance, genetics and neuroscience. This dissertation studies a number of high dimensional statistical problems involving different classes of mixing processes. 
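For reference, the lasso itself can be solved by coordinate descent with soft-thresholding; the sketch below uses iid simulated data (it does not reproduce the dissertation's dependent-data setting, and the penalty level and dimensions are illustrative).

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=100):
    """Coordinate descent for (1/2n)||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]     # residual excluding j
            rho = X[:, j] @ r / n
            z = (X[:, j] ** 2).mean()
            # Soft-thresholding: coordinates with weak signal are set to 0.
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
beta_true = np.array([3.0, 0.0, 0.0, -2.0, 0.0])
y = X @ beta_true + rng.normal(scale=0.1, size=200)
beta_hat = lasso_cd(X, y, lam=0.1)
```

The dissertation's contribution is to show when guarantees for estimates like `beta_hat` (sparsity recovery, estimation error) continue to hold once the rows of X and the noise are serially correlated mixing processes rather than iid draws.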

9. Random forest robustness, variable importance, and tree aggregation

Random forest methodology is a nonparametric, machine learning approach capable of strong performance in regression and classification problems involving complex data sets. In addition to making predictions, random forests can be used to assess the relative importance of feature variables. This dissertation explores three topics related to random forests: tree aggregation, variable importance, and robustness. 

10. Climate Data Computing: Optimal Interpolation, Averaging, Visualization and Delivery

This dissertation solves two important problems in the modern analysis of big climate data. The first is the efficient visualization and fast delivery of big climate data; the second is a computationally intensive principal component analysis (PCA) using spherical harmonics on the Earth's surface. The second problem creates a way to supply the data for the technology developed in the first. Both problems are computationally demanding; for example, representing higher-order spherical harmonics such as Y400 is critical for upscaling weather data to extremely fine spatial resolution.

I hope you enjoyed learning about these compelling machine learning dissertations.

Editor’s note: Interested in more data science research? Check out the Research Frontiers track at ODSC Europe this September 17-19 or the ODSC West Research Frontiers track this October 27-30.


Daniel Gutierrez, ODSC

Daniel D. Gutierrez is a practicing data scientist who’s been working with data long before the field came in vogue. As a technology journalist, he enjoys keeping a pulse on this fast-paced industry. Daniel is also an educator having taught data science, machine learning and R classes at the university level. He has authored four computer industry books on database and data science technology, including his most recent title, “Machine Learning and Data Science: An Introduction to Statistical Learning Methods with R.” Daniel holds a BS in Mathematics and Computer Science from UCLA.


Thesis Topics

This list includes topics for potential bachelor or master theses, guided research, projects, seminars, and other activities. Search with Ctrl+F for desired keywords, e.g. ‘machine learning’ or others.

PLEASE NOTE: If you are interested in any of these topics, click the respective supervisor link to send a message with a simple CV, grade sheet, and topic ideas (if any). We will answer shortly.

Of course, your own ideas are always welcome!

Generating images for training Image Super-Resolution models

Type of work:

  • Guided Research
  • deep learning
  • single image super-resolution
  • synthetic datasets / dataset generation

Description:

Typically, Single Image Super-Resolution (SISR) models are trained on expressive real images (e.g., DIV2K and/or Flickr2K). This work aims to rethink the need for real images when training SISR models. In other words: do we need real images to learn useful upscaling mappings? To that end, the proposed work should investigate different methods for generating artificial datasets that might be suitable for training SISR models, see [2]. The resulting models, trained on the artificially generated training sets, should then be evaluated on real test datasets (Set5, Set14, BSDS100, …) and their outcomes analyzed. 

  • [1] Hitchhiker’s Guide to Super-Resolution: Introduction and Recent Advances
  • [2] Learning to See by Looking at Noise
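As a minimal illustration of the idea (not part of the official topic materials), the sketch below generates one synthetic (LR, HR) training pair from procedural noise, in the spirit of [2]; the smoothing scheme and average-pooling downscaler are illustrative assumptions, not the method a thesis would necessarily use:

```python
import numpy as np

def make_synthetic_pair(size=32, scale=2, rng=None):
    """Generate one (LR, HR) training pair from procedural noise.

    The HR image is smoothed random noise (a crude stand-in for the
    statistical image models discussed in [2]); the LR image is
    produced by average pooling with the given scale factor.
    """
    rng = np.random.default_rng(rng)
    hr = rng.random((size, size))
    # cheap smoothing: average each pixel with its right/down neighbours
    hr = (hr + np.roll(hr, 1, 0) + np.roll(hr, 1, 1)) / 3.0
    lr = hr.reshape(size // scale, scale, size // scale, scale).mean(axis=(1, 3))
    return lr, hr

lr, hr = make_synthetic_pair(size=32, scale=2, rng=0)
print(lr.shape, hr.shape)  # (16, 16) (32, 32)
```

A SISR model would then be trained to map `lr` back to `hr`, and evaluated on the real test sets listed above.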

Machine Learning-based Surrogate Models for Accelerated Flow Simulations

  • Machine Learning
  • Microstructure Property Prediction
  • Surrogate Modeling

Surrogate modeling involves creating a simplified and computationally efficient machine learning model that approximates the behavior of a complex system, enabling faster predictions and analysis. For complex systems such as fluids, their behavior is governed by partial differential equations. By solving these PDEs, one can predict how a fluid behaves in a specific environment and conditions. The computational time and resources needed to solve a PDE system depend on the size of the fluid domain and the complexity of the PDE. In practical applications where multiple environments and conditions are to be studied, it becomes very expensive to generate many solutions to such PDEs. Here, modern machine learning or deep learning-based surrogate models which offer fast inference times in the online phase are of interest.

In this work, the focus will be on developing surrogate models to replace the flow simulations in fiber-reinforced composite materials governed by the Navier-Stokes equation. Using a conventional PDE solver, a dataset of reference solutions was generated for supervised learning. In this thesis, your tasks will include the conceptualization and implementation of different ML architectures suited for this task, as well as training and evaluation of the models on the available dataset. You will start with simple fully connected architectures and later extend them to 3D convolutional architectures. Also of interest is the infusion of available domain knowledge into the ML models, known as physics-informed machine learning.

By applying ML to fluid applications, you will learn to acquire the right amount of domain specific knowledge and analyze your results together with domain experts from the field.

If you are interested, please send me an email with your Curriculum Vitae (CV), your Transcript of records and a short statement about your background in related topics.

References:

  • Santos, J.E., Xu, D., Jo, H., Landry, C.J., Prodanović, M., Pyrcz, M.J., 2020. PoreFlow-Net: A 3D convolutional neural network to predict fluid flow through porous media. Advances in Water Resources 138, 103539. https://doi.org/10.1016/j.advwatres.2020.103539
  • Kashefi, A., Mukerji, T., 2021. Point-cloud deep learning of porous media for permeability prediction. Physics of Fluids 33, 097109. https://doi.org/10.1063/5.0063904
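To make the surrogate idea concrete, here is a minimal sketch of a fully connected surrogate trained by gradient descent; a toy smooth scalar response stands in for the precomputed Navier-Stokes reference solutions, which is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for expensive PDE solves: a smooth scalar response
# (the real project would use precomputed Navier-Stokes reference solutions).
X = rng.uniform(-1, 1, (256, 2))
y = np.sin(np.pi * X[:, :1]) * np.cos(np.pi * X[:, 1:])

# One-hidden-layer fully connected surrogate, trained by full-batch gradient descent.
W1 = rng.normal(0.0, 0.5, (2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.5, (32, 1)); b2 = np.zeros(1)
step_size = 0.3
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # fast surrogate prediction
    err = pred - y
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)  # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= step_size * gW1; b1 -= step_size * gb1
    W2 -= step_size * gW2; b2 -= step_size * gb2

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
print(f"surrogate training MSE: {mse:.4f}")
```

Once trained, the surrogate's forward pass replaces the PDE solve in the online phase, which is where the speedup comes from.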

Segmentation of Shoe Trace Images

  • benchmarking
  • image segmentation
  • keypoint extraction
  • self-attention

Help fight crime with AI! The DFKI and the Artificial Intelligence Transferlab of the State Criminal Police Office (Landeskriminalamt) are searching for Master's candidates eager to apply their knowledge in AI to support crime scene analysis. The student will have the opportunity to visit the Transferlab in Mainz for an in-depth introduction to the topic and full access to DFKI's computing cluster infrastructure.

General goal: improve identification of specific markers normally present in shoe trace images acquired in crime scenes.

Specific goals:

  • [benchmarking] evaluate existing image segmentation models in the context of shoe trace analysis;
  • [research] propose a segmentation model combining semantic and keypoint information tailored to the specific markers present in crime scene photographs;
  • [research] assess model performance on labeled data;
  • [research] define limits and requirements for the existing training and test data.

Retrieval of Shoe Sole Images

  • graph neural networks
  • image retrieval

General goal: improve retrieval of shoe sole images acquired in the laboratory, i.e., under controlled conditions, and used as references by forensics specialists.

  • [benchmarking] evaluate existing image retrieval approaches in the context of shoe trace recognition;
  • [research] propose a graph network architecture based on keypoint information extracted from the images;
  • [research] evaluate the performance of the proposed model against existing methods.

Sherlock Holmes goes AI - Generative comics art of detective scenes and identikits

  • Bias in image generation models
  • Deep Learning Frameworks
  • Frontend visualization
  • Speech-To-Text, Text-to-Image Models
  • Transformers, Diffusion Models, Hugging Face

Sherlock Holmes is taking the statement of the witness. The witness is describing the appearance of the perpetrator and the forensic setting they still remember. Your task as the AI investigator will be to generate a comic sketch of the scene and identikit images of the accused person based on the spoken statement of the witness. For this, you will use state-of-the-art transformers and visualize the output in an application. As the AI investigator, you will detect, qualify, and quantify bias in the images produced by the different generation models you have chosen.

This work is embedded in the DFKI KI4Pol lab together with the law enforcement agencies. The stories are fictional; you will not work on true crime.

Requirements:

  • German level B1/2 or equivalent
  • Outstanding academic achievements
  • Motivational cover letter

Generative Adversarial Networks for Agricultural Yield Prediction

  • Deep Learning
  • Generative Adversarial Networks
  • Yield Prediction

Agricultural yield prediction has been an essential research area for many years, as it helps farmers and policymakers to make informed decisions about crop management, resource allocation, and food security. Computer vision and machine learning techniques have shown promising results in predicting crop yield, but there is still room for improvement in the accuracy and precision of these predictions. Generative Adversarial Networks (GANs) are a type of neural network that has shown success in generating realistic images, which can be leveraged for the prediction of agricultural yields.

  • Goodfellow, Ian, et al. "Generative adversarial networks." Communications of the ACM 63.11 (2020): 139-144.
  • Xu, Z., Du, J., Wang, J., Jiang, C. and Ren, Y., "Satellite Image Prediction Relying on GAN and LSTM Neural Networks," ICC 2019 - 2019 IEEE International Conference on Communications (ICC), Shanghai, China, 2019, pp. 1-6, doi: 10.1109/ICC.2019.8761462.
  • Drees, Lukas, et al. "Temporal prediction and evaluation of brassica growth in the field using conditional generative adversarial networks." Computers and Electronics in Agriculture 190 (2021): 106415.

Knowledge Graphs for Real Estate Management

  • corporate memory
  • knowledge graph

Real estate management is complex and draws on a wide variety of information sources and objects to carry out its processes. A Corporate Memory can support the analysis and mapping of this information space in order to enable knowledge services. The task is to design an ontology for real estate management and to develop an example scenario. Good German skills are required for the materials and for working with the application partners.

Fault and Efficiency Prediction in High Performance Computing

  • Master Thesis
  • event data modelling
  • survival modelling
  • time series

High resource usage is thought to be an indirect cause of failures in large cluster systems, but little work has systematically investigated its role in system failures, largely due to the lack of comprehensive resource monitoring tools that resolve resource use by job and node. This project studies log data of the DFKI Kaiserslautern high-performance cluster to assess the predictability of adverse events (node failure, GPU freeze) and energy usage, and to identify the most relevant data within. The second supervisor for this work is Joachim Folz.

Data is available via a Prometheus-compatible system:

  • Node exporter
  • DCGM exporter
  • Slurm exporter
  • Linking Resource Usage Anomalies with System Failures from Cluster Log Data
  • Deep Survival Models
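As a small, hypothetical example of working with such data, the sketch below parses a snapshot in the Prometheus text exposition format; the sample metrics are made up, and real data would come from the exporters listed above:

```python
# Minimal parser for the Prometheus text exposition format, applied to a
# hypothetical node-exporter snapshot (real data would come from the
# cluster's /metrics endpoints).
sample = """\
# HELP node_memory_MemAvailable_bytes Memory available.
# TYPE node_memory_MemAvailable_bytes gauge
node_memory_MemAvailable_bytes 8.2e+09
node_cpu_seconds_total{cpu="0",mode="idle"} 123456.7
node_cpu_seconds_total{cpu="1",mode="idle"} 123400.1
"""

def parse_metrics(text):
    """Return {(name, labels): value}, skipping comment lines."""
    out = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        metric, value = line.rsplit(" ", 1)
        name, _, labels = metric.partition("{")
        out[(name, "{" + labels if labels else "")] = float(value)
    return out

metrics = parse_metrics(sample)
print(len(metrics))  # 3 series
```

Time series of such samples, indexed by job and node, would then feed the event and survival models this project is about.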

Feel free to reach out if the topic sounds interesting or if you have ideas related to this work. We can then brainstorm a specific research question together. Link to my personal website.

Construction & Application of Enterprise Knowledge Graphs in the E-Invoicing Domain

  • Guided Research Project
  • knowledge graphs
  • knowledge services
  • linked data
  • semantic web

In recent years, knowledge graphs have received a lot of attention in industry as well as in science. Knowledge graphs consist of entities and relationships between them and allow new knowledge to be integrated arbitrarily. Famous industrial instances are the knowledge graphs of Microsoft, Google, Facebook, and IBM. Beyond these, knowledge graphs are also adopted in more domain-specific scenarios such as e-Procurement, e-Invoicing, and purchase-to-pay processes. The objective in theses and projects is to explore particular aspects of constructing and/or applying knowledge graphs in the domain of purchase-to-pay processes and e-Invoicing.

Learning Analytics in Education

  • affective state
  • cognitive state
  • machine learning

Anomaly detection in time-series

  • explainability

This topic concerns deep neural networks for making the time-series anomaly detection process more robust. An important aspect of this process is the explainability of the decisions taken by a network.
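A transparent classical baseline against which deep detectors (and their explanations) can be compared is a rolling z-score detector; the following is a minimal sketch with an injected anomaly:

```python
import numpy as np

def rolling_zscore_anomalies(x, window=20, thresh=3.0):
    """Flag points whose deviation from the rolling mean exceeds
    `thresh` rolling standard deviations over the preceding window."""
    flags = np.zeros(len(x), dtype=bool)
    for t in range(window, len(x)):
        ref = x[t - window:t]
        sd = ref.std()
        if sd > 0 and abs(x[t] - ref.mean()) > thresh * sd:
            flags[t] = True
    return flags

rng = np.random.default_rng(0)
x = rng.normal(0, 1, 200)
x[150] += 10.0                                   # injected anomaly
print(np.flatnonzero(rolling_zscore_anomalies(x)))
```

Unlike a deep detector, every flag here comes with an immediate explanation (deviation in units of local standard deviations), which is the kind of interpretability the deep models should be measured against.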

Time Series Forecasting Using Transformer Networks

  • time series forecasting
  • transformer networks

Transformer networks have emerged as a competent architecture for modeling sequences. This research will primarily focus on using transformer networks for forecasting time series (multivariate or univariate) and may also involve fusing domain knowledge into the machine learning architecture.
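The core building block of such a forecaster can be sketched in a few lines; this is a single attention head with identity projections and a causal mask, a deliberate simplification of a full transformer:

```python
import numpy as np

def self_attention(X):
    """Single-head scaled dot-product self-attention (the core of a
    transformer forecaster), here with identity projections for brevity."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)            # pairwise time-step affinities
    # causal mask: a forecast at step t may only attend to steps <= t
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores[mask] = -np.inf
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ X

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))      # 8 time steps, 4 embedded features
out = self_attention(X)
print(out.shape)  # (8, 4)
```

In a real forecaster, learned query/key/value projections, multiple heads, and a prediction head on top of the final time step replace these simplifications.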


8 Best Topics for Research and Thesis in Artificial Intelligence


1. Machine Learning
2. Deep Learning
3. Reinforcement Learning
4. Robotics
5. Natural Language Processing
6. Computer Vision
7. Recommender Systems
8. Internet of Things


Technical University of Munich


Open Topics

We offer multiple Bachelor's/Master's theses, Guided Research projects, and IDPs in the area of data mining and machine learning. A non-exhaustive list of open topics is given below.

If you are interested in a thesis or a guided research project, please send your CV and transcript of records to Prof. Stephan Günnemann via email and we will arrange a meeting to talk about the potential topics.

Generative Models for Drug Discovery

Type: Master's Thesis / Guided Research

Prerequisites:

  • Strong machine learning knowledge
  • Proficiency with Python and deep learning frameworks (PyTorch or TensorFlow)
  • Knowledge of graph neural networks (e.g. GCN, MPNN)
  • No formal education in chemistry, physics or biology needed!

Description:

Effectively designing molecular geometries is essential to advancing pharmaceutical innovations, a domain which has experienced great attention through the success of generative models. These models promise a more efficient exploration of the vast chemical space and generation of novel compounds with specific properties by leveraging their learned representations, potentially leading to the discovery of molecules with unique properties that would otherwise go undiscovered. Our topics lie at the intersection of generative models like diffusion/flow matching models and graph representation learning, e.g., graph neural networks. The focus of our projects can be model development with an emphasis on downstream tasks (e.g., diffusion guidance at inference time) and a better understanding of the limitations of existing models.

Contact: Johanna Sommer, Leon Hetzel

References:

Equivariant Diffusion for Molecule Generation in 3D

Equivariant Flow Matching with Hybrid Probability Transport for 3D Molecule Generation

Structure-based Drug Design with Equivariant Diffusion Models

Data Pruning and Active Learning

Type: Interdisciplinary Project (IDP) / Hiwi / Guided Research / Master's Thesis

  • Strong knowledge in machine learning
  • Very good coding skills
  • Proficiency with Python and deep learning frameworks (TensorFlow or PyTorch)

Data pruning and active learning are vital techniques in scaling machine learning applications efficiently. Data pruning involves the removal of redundant or irrelevant data, which enables training models with considerably less data but the same performance. Similarly, active learning describes the process of selecting the most informative data points for labeling, thus reducing annotation costs and accelerating model training. However, current methods are often computationally expensive, which makes them difficult to apply in practice. Our objective is to scale active learning and data pruning methods to large datasets using an extrapolation-based approach.

Contact: Sebastian Schmidt, Tom Wollschläger, Leo Schwinn

  • Large-scale Dataset Pruning with Dynamic Uncertainty
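A minimal sketch of the selection step, assuming margin-based uncertainty as the pruning criterion (one of several possible scores, and not necessarily the one a thesis would settle on):

```python
import numpy as np

def prune_by_uncertainty(probs, keep_frac=0.5):
    """Keep the most 'informative' samples: those with the smallest
    margin between the top two predicted class probabilities.
    `probs` is (n_samples, n_classes) of model softmax outputs."""
    part = np.sort(probs, axis=1)
    margin = part[:, -1] - part[:, -2]          # small margin = uncertain
    keep = int(len(probs) * keep_frac)
    return np.argsort(margin)[:keep]            # indices to keep

probs = np.array([[0.9, 0.1], [0.55, 0.45], [0.6, 0.4], [0.99, 0.01]])
print(prune_by_uncertainty(probs, keep_frac=0.5))  # [1 2]
```

The same margin score serves both settings: in pruning it discards confidently handled samples, and in active learning it selects which unlabeled points to send for annotation.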

Efficient Machine Learning: Pruning, Quantization, Distillation, and More - DAML x Pruna AI

Type: Master's Thesis / Guided Research / Hiwi

The efficiency of machine learning algorithms is commonly evaluated by looking at target performance, speed, and memory footprint metrics. Reducing the costs associated with these metrics is of primary importance for real-world applications with limited resources (e.g., embedded systems, real-time predictions). In this project, you will work in collaboration with the DAML research group and the Pruna AI startup on investigating solutions to improve the efficiency of machine learning models by looking at multiple techniques like pruning, quantization, distillation, and more.

Contact: Bertrand Charpentier

  • The Efficiency Misnomer
  • A Gradient Flow Framework for Analyzing Network Pruning
  • Distilling the Knowledge in a Neural Network
  • A Survey of Quantization Methods for Efficient Neural Network Inference

Deep Generative Models

Type:  Master Thesis / Guided Research

  • Strong machine learning and probability theory knowledge
  • Knowledge of generative models and their basics (e.g., Normalizing Flows, Diffusion Models, VAE)
  • Optional: Neural ODEs/SDEs, Optimal Transport, Measure Theory

With recent advances, such as Diffusion Models, Transformers, Normalizing Flows, Flow Matching, etc., the field of generative models has gained significant attention in the machine learning and artificial intelligence research community. However, many problems and questions remain open, and the application to complex data domains such as graphs, time series, point processes, and sets is often non-trivial. We are interested in supervising motivated students to explore and extend the capabilities of state-of-the-art generative models for various data domains.

Contact: Marcel Kollovieh, Marten Lienen, David Lüdke

  • Flow Matching for Generative Modeling
  • Auto-Encoding Variational Bayes
  • Denoising Diffusion Probabilistic Models 
  • Structured Denoising Diffusion Models in Discrete State-Spaces

Graph Structure Learning

Type:  Guided Research / Hiwi

  • Optional: Knowledge of graph theory and mathematical optimization

Graph deep learning is a powerful ML concept that enables the generalisation of successful deep neural architectures to non-Euclidean structured data. Such methods have shown promising results in a vast range of applications spanning the social sciences, biomedicine, particle physics, computer vision, graphics and chemistry. One of the major limitations of most current graph neural network architectures is that they often rely on the assumption that the underlying graph is known and fixed. However, this assumption is not always true, as the graph may be noisy or partially and even completely unknown. In the case of noisy or partially available graphs, it would be useful to jointly learn an optimised graph structure and the corresponding graph representations for the downstream task. On the other hand, when the graph is completely absent, it would be useful to infer it directly from the data. This is particularly interesting in inductive settings where some of the nodes were not present at training time. Furthermore, learning a graph can become an end in itself, as the inferred structure can provide complementary insights with respect to the downstream task. In this project, we aim to investigate solutions and devise new methods to construct an optimal graph structure based on the available (unstructured) data.

Contact: Filippo Guerranti

  • A Survey on Graph Structure Learning: Progress and Opportunities
  • Differentiable Graph Module (DGM) for Graph Convolutional Networks
  • Learning Discrete Structures for Graph Neural Networks

NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification
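A minimal sketch of the simplest structure-inference baseline, a k-nearest-neighbour graph built from unstructured data (the methods above learn the structure rather than fixing it like this):

```python
import numpy as np

def knn_graph(X, k=2):
    """Infer a graph from unstructured data by connecting each point to
    its k nearest neighbours (a common baseline when no graph is given)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                 # no self-loops
    nbrs = np.argsort(d2, axis=1)[:, :k]
    A = np.zeros_like(d2, dtype=int)
    rows = np.repeat(np.arange(len(X)), k)
    A[rows, nbrs.ravel()] = 1
    return np.maximum(A, A.T)                    # symmetrise

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
A = knn_graph(X, k=1)
print(A)
```

Differentiable graph structure learning replaces the hard argsort here with a learned, task-driven edge distribution, which is exactly what the referenced methods investigate.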

Graph Neural Networks

Type:  Master's thesis / Bachelor's thesis / guided research

  • Knowledge of graph/network theory

Graph neural networks (GNNs) have recently achieved great successes in a wide variety of applications, such as chemistry, reinforcement learning, knowledge graphs, traffic networks, or computer vision. These models leverage graph data by updating node representations based on messages passed between nodes connected by edges, or by transforming node representation using spectral graph properties. These approaches are very effective, but many theoretical aspects of these models remain unclear and there are many possible extensions to improve GNNs and go beyond the nodes' direct neighbors and simple message aggregation.

Contact: Simon Geisler

  • Semi-supervised classification with graph convolutional networks
  • Relational inductive biases, deep learning, and graph networks
  • Diffusion Improves Graph Learning
  • Weisfeiler and leman go neural: Higher-order graph neural networks
  • Reliable Graph Neural Networks via Robust Aggregation
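The message-passing idea can be sketched in a few lines; this follows the normalized propagation rule of the graph convolutional network (Kipf & Welling, first reference above) on a toy path graph with illustrative weights:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph convolution: H = ReLU(D^-1/2 (A+I) D^-1/2 X W),
    i.e., each node aggregates degree-normalized messages from its
    neighbours (and itself) before a shared linear transform."""
    A_hat = A + np.eye(len(A))                   # add self-loops
    d = A_hat.sum(1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path 0-1-2
X = np.eye(3)                                    # one-hot node features
W = np.ones((3, 2))                              # toy weights
H = gcn_layer(A, X, W)
print(H.shape)  # (3, 2)
```

Stacking such layers lets information travel beyond direct neighbours, which is one of the extension directions the topic mentions.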

Physics-aware Graph Neural Networks

Type:  Master's thesis / guided research

  • Proficiency with Python and deep learning frameworks (JAX or PyTorch)
  • Knowledge of graph neural networks (e.g. GCN, MPNN, SchNet)
  • Optional: Knowledge of machine learning on molecules and quantum chemistry

Deep learning models, especially graph neural networks (GNNs), have recently achieved great successes in predicting quantum mechanical properties of molecules. There is a vast amount of applications for these models, such as finding the best method of chemical synthesis or selecting candidates for drugs, construction materials, batteries, or solar cells. However, GNNs have only been proposed in recent years and there remain many open questions about how to best represent and leverage quantum mechanical properties and methods.

Contact: Nicholas Gao

  • Directional Message Passing for Molecular Graphs
  • Neural message passing for quantum chemistry
  • Learning to Simulate Complex Physics with Graph Networks
  • Ab initio solution of the many-electron Schrödinger equation with deep neural networks
  • Ab-Initio Potential Energy Surfaces by Pairing GNNs with Neural Wave Functions
  • Tensor field networks: Rotation- and translation-equivariant neural networks for 3D point clouds

Robustness Verification for Deep Classifiers

Type: Master's thesis / Guided research

  • Strong machine learning knowledge (at least equivalent to IN2064 plus an advanced course on deep learning)
  • Strong background in mathematical optimization (preferably combined with Machine Learning setting)
  • Proficiency with Python and deep learning frameworks (PyTorch or TensorFlow)
  • (Preferred) Knowledge of training techniques to obtain classifiers that are robust against small perturbations in data

Description: Recent work shows that deep classifiers suffer in the presence of adversarial examples: misclassified points that are very close to training samples or even visually indistinguishable from them. This undesired behaviour constrains the deployment of promising neural-network-based classification methods in safety-critical scenarios. Therefore, new training methods should be proposed that promote (or, preferably, ensure) robust behaviour of the classifier around the training samples.

Contact: Aleksei Kuvshinov

References (Background):

  • Intriguing properties of neural networks
  • Explaining and harnessing adversarial examples
  • SoK: Certified Robustness for Deep Neural Networks
  • Certified Adversarial Robustness via Randomized Smoothing
  • Formal guarantees on the robustness of a classifier against adversarial manipulation
  • Towards deep learning models resistant to adversarial attacks
  • Provable defenses against adversarial examples via the convex outer adversarial polytope
  • Certified defenses against adversarial examples
  • Lipschitz-margin training: Scalable certification of perturbation invariance for deep neural networks
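One of the simplest certification techniques in this literature, interval bound propagation, can be sketched as follows; the two-layer network and epsilon are illustrative assumptions:

```python
import numpy as np

def ibp_bounds(lo, hi, weights, biases):
    """Interval bound propagation through a ReLU net: propagates an
    l_inf input box [lo, hi] and returns output logit bounds. If the
    lower bound of the true-class logit exceeds every other class's
    upper bound, the classifier is certifiably robust on that box."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        mid, rad = (lo + hi) / 2, (hi - lo) / 2
        mid_out = mid @ W + b
        rad_out = rad @ np.abs(W)
        lo, hi = mid_out - rad_out, mid_out + rad_out
        if i < len(weights) - 1:                 # ReLU on hidden layers
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

# toy 2-layer net and an eps-box around an input
W1 = np.array([[1.0, -1.0], [0.5, 0.5]]); b1 = np.zeros(2)
W2 = np.array([[1.0, 0.0], [0.0, 1.0]]);  b2 = np.zeros(2)
x = np.array([1.0, 0.0]); eps = 0.1
lo, hi = ibp_bounds(x - eps, x + eps, [W1, W2], [b1, b2])
print(lo, hi)
```

Here the class-0 lower bound stays above the class-1 upper bound over the whole box, so no perturbation within eps can flip the prediction; tighter relaxations (see the references) reduce the looseness of such interval bounds.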

Uncertainty Estimation in Deep Learning

Type: Master's Thesis / Guided Research

  • Strong knowledge in probability theory

Safe prediction is a key feature in many intelligent systems. Classically, machine learning models compute output predictions regardless of the underlying uncertainty of the encountered situations. In contrast, aleatoric and epistemic uncertainty bring knowledge about undecidable and uncommon situations. The uncertainty view can be a substantial help in detecting and explaining unsafe predictions, and can therefore make ML systems more robust. The goal of this project is to improve uncertainty estimation in ML models across various types of tasks.

Contact: Tom Wollschläger, Dominik Fuchsgruber, Bertrand Charpentier

  • Can You Trust Your Model’s Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift
  • Predictive Uncertainty Estimation via Prior Networks
  • Posterior Network: Uncertainty Estimation without OOD samples via Density-based Pseudo-Counts
  • Evidential Deep Learning to Quantify Classification Uncertainty
  • Weight Uncertainty in Neural Networks
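A common starting point is the decomposition of ensemble uncertainty into aleatoric and epistemic parts; the sketch below does this for a toy two-member ensemble (the decomposition is standard, the numbers are illustrative):

```python
import numpy as np

def predictive_uncertainty(member_probs):
    """Decompose ensemble uncertainty: total entropy = aleatoric
    (mean member entropy) + epistemic (mutual information).
    `member_probs` is (n_members, n_classes)."""
    eps = 1e-12
    mean_p = member_probs.mean(0)
    total = -(mean_p * np.log(mean_p + eps)).sum()
    aleatoric = -(member_probs * np.log(member_probs + eps)).sum(1).mean()
    return total, aleatoric, total - aleatoric   # epistemic part last

agree = np.array([[0.9, 0.1], [0.9, 0.1]])      # members agree
disagree = np.array([[0.9, 0.1], [0.1, 0.9]])   # members disagree
print(predictive_uncertainty(agree)[2], predictive_uncertainty(disagree)[2])
```

Agreement among members yields near-zero epistemic uncertainty even though each member is uncertain, while disagreement yields high epistemic uncertainty; it is the latter that flags uncommon, out-of-distribution-like situations.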

Hierarchies in Deep Learning

Type:  Master's Thesis / Guided Research

Multi-scale structures are ubiquitous in real life datasets. As an example, phylogenetic nomenclature naturally reveals a hierarchical classification of species based on their historical evolutions. Learning multi-scale structures can help to exhibit natural and meaningful organizations in the data and also to obtain compact data representation. The goal of this project is to leverage multi-scale structures to improve speed, performances and understanding of Deep Learning models.

Contact: Marcel Kollovieh , Bertrand Charpentier

  • Tree Sampling Divergence: An Information-Theoretic Metric for Hierarchical Graph Clustering
  • Hierarchical Graph Representation Learning with Differentiable Pooling
  • Gradient-based Hierarchical Clustering
  • Gradient-based Hierarchical Clustering using Continuous Representations of Trees in Hyperbolic Space

M.Tech/Ph.D Thesis Help in Chandigarh | Thesis Guidance in Chandigarh

[email protected]

+91-9465330425

Latest thesis topics in Machine Learning for research scholars:

Choosing a research and thesis topic in Machine Learning is the first task for Master's and Doctoral scholars nowadays. However, choosing and working on a thesis topic in machine learning is not easy, as machine learning uses certain statistical algorithms to make computers work in a certain way without being explicitly programmed. The algorithms receive an input value and predict an output by the use of certain statistical methods. The main aim of machine learning is to create intelligent machines which can think and work like human beings. Achieving these goals is not easy, which is why students who choose a research topic in machine learning face difficult challenges and require professional help with their thesis work.

Below is the list of the latest thesis topics in Machine learning for research scholars:

  • A classification technique for face spoof detection in artificial neural networks using concepts of machine learning
  • An iris detection and recognition system using classification and the GLCM algorithm in machine learning
  • Using machine learning algorithms to detect patterns with textual feature analysis and classification
  • Plant disease detection using GLCM and KNN classification in neural networks merged with the concepts of machine learning
  • Using machine learning algorithms to propose a technique for prediction analysis in data mining
  • A sentiment analysis technique using an SVM classifier in data mining with a machine learning approach
  • Heart disease prediction using classification techniques in machine learning with the concepts of data mining

So let’s start with machine learning.

First of all…

What exactly is machine learning?

Find the link at the end to download the latest topics for thesis and research in Machine Learning

What is Machine Learning?


Machine Learning is a branch of artificial intelligence that gives systems the ability to learn and improve automatically from experience without being explicitly programmed and without human intervention. Its main aim is to make computers learn automatically from experience.

Requirements of creating good machine learning systems

So what is required for creating such machine learning systems? The following are the key components:

Data – Input data is required for predicting the output.

Algorithms – Machine learning depends on certain statistical algorithms to determine data patterns.

Automation – The ability to make systems operate automatically.

Iteration – The complete process is iterative, i.e., the process is repeated.

Scalability – The capacity of the system can be increased or decreased in size and scale.

Modeling – Models are created on demand through the process of modeling.

Methods of Machine Learning


Machine learning methods are classified into the following categories:

  • Supervised Learning
  • Unsupervised Learning
  • Reinforcement Learning

Supervised Learning – In this method, input and output are provided to the computer along with feedback during training. The accuracy of the computer's predictions during training is also analyzed. The main goal of this training is to make computers learn how to map input to output.

Unsupervised Learning – In this case, no such training is provided; the computer must find structure in the data on its own. Unsupervised learning is mostly applied to transactional data and is used for more complex tasks. Deep learning techniques are sometimes combined with it to arrive at conclusions.

Reinforcement Learning – This type of learning uses three components: an agent, an environment, and actions. The agent is the one that perceives its surroundings; the environment is what the agent interacts with and acts within. The main goal in reinforcement learning is to find the best possible policy.
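
The agent/environment/action loop described above can be sketched with tabular Q-learning on a made-up toy environment (this example, including all states and rewards, is illustrative and not from the article):

```python
import random

# Toy environment: states 0..4 lie on a line; reaching state 4 pays reward 1.
N_STATES, GOAL = 5, 4

def step(state, action):
    """Actions: 0 = left, 1 = right. The episode ends at the goal state."""
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def q_learning(episodes=1000, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection (random when the values are tied)
            if rng.random() < eps or q[s][0] == q[s][1]:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            nxt, r, done = step(s, a)
            # Q-update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
            q[s][a] += alpha * (r + gamma * max(q[nxt]) - q[s][a])
            s = nxt
    return q

q = q_learning()
# the learned greedy policy (the "best possible policy" above) should point right
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(GOAL)]
print(policy)
```

The agent starts knowing nothing, explores at random, and gradually learns that moving right toward the goal is the best policy.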

How does machine learning work?


Machine learning makes use of processes similar to those of data mining. A machine learning algorithm is described in terms of a target function f that maps an input variable x to an output variable y. This can be represented as:

y = f(x)

There is also an error e, which is independent of the input variable x. Thus the more general form of the equation is:

y = f(x) + e

In machine learning, the mapping from x to y is learned in order to make predictions. This method is known as predictive modeling, and its aim is to make the most accurate predictions possible. Various assumptions can be made about the form of this function.
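
A minimal sketch of this idea, with made-up values: generate noisy samples y = f(x) + e from a known linear target function, then recover an approximation of f by least squares.

```python
import numpy as np

# Illustrative sketch (all numbers are made up): recover an approximation
# of the target function f from noisy samples y = f(x) + e.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 100)
f = lambda v: 3.0 * v + 2.0                 # the "unknown" target function
e = rng.normal(0.0, 0.1, size=x.shape)      # error term, independent of x
y = f(x) + e

# Predictive modeling: estimate f by least squares (here, a straight line).
slope, intercept = np.polyfit(x, y, 1)
print(slope, intercept)  # both should land close to the true 3.0 and 2.0
```

The fitted coefficients approach the true ones as more samples are observed; the error e sets a floor on how accurate the predictions can be.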

Benefits of Machine Learning


Machine learning now underpins many everyday systems. Here are some of its benefits.

Faster decision making – Machine learning delivers the best possible outcomes by prioritizing and automating routine decision-making processes.

Adaptability – Machine learning provides the ability to adapt rapidly to a changing environment, and the environment changes quickly because data is constantly being updated.

Innovation – Machine learning uses advanced algorithms that improve the overall decision-making capacity. This helps in developing innovative business services and models.

Insight – Machine learning helps in understanding unique data patterns, on the basis of which specific actions can be taken.

Business growth – With machine learning, overall business processes and workflows become faster, which contributes to overall business growth and acceleration.

Better outcomes – With machine learning, the quality of the outcome improves and the chance of error falls.

Branches of Machine Learning

  • Computational Learning Theory
  • Adversarial Machine Learning
  • Quantum Machine Learning
  • Predictive Analysis
  • Robot Learning
  • Grammar Induction
  • Meta-Learning

Computational Learning Theory – Computational learning theory is a subfield of machine learning devoted to studying and analyzing machine learning algorithms; its classical results mostly concern supervised learning.

Adversarial Machine Learning – Adversarial machine learning sits at the intersection of machine learning and computer security. Its main aim is to make machine learning methods safer against threats such as spam and malware. It works on the following three principles:

Finding vulnerabilities in machine learning algorithms.

Devising strategies to check these potential vulnerabilities.

Implementing these preventive measures to improve the security of the algorithms.

Quantum Machine Learning – This area of machine learning draws on quantum physics. Classical data sets are translated into quantum states for processing on a quantum computer. Algorithms such as Grover's search can then be applied, for example to speed up unstructured search problems.

Predictive Analysis – Predictive analysis uses statistical techniques from data modeling, machine learning, and data mining to analyze current and historical data and predict the future. It extracts information from the given data. Customer relationship management (CRM) is a common application of predictive analysis.

Robot Learning – This area deals with the interaction of machine learning and robotics. It employs learning algorithms that let robots adapt to their surrounding environment.

Grammar Induction – It is the process of learning a formal grammar from a given set of observations in order to identify characteristics of the observed model. Grammar induction can be done through genetic algorithms and greedy algorithms.

Meta-Learning – In this process, learning algorithms are applied to metadata about previous learning tasks; the goal is to automate the learning process itself.

Best Machine Learning Tools

Here is a list of artificial intelligence and machine learning tools for developers:

ai-one – It is a very good tool that provides a software development kit for developers to implement artificial intelligence in an application.

Protege – It is a free and open-source framework and editor to build intelligent systems with the concept of ontology. It enables developers to create, upload and share applications.

IBM Watson – It is an open-API question answering system that answers questions asked in natural language. It has a collection of tools which can be used by developers and in business.

DiffBlue – It is another artificial intelligence tool whose main objective is to locate bugs and errors and to fix weaknesses in code, all through automation.

TensorFlow – It is an open-source software library for machine learning. TensorFlow provides a library of numerical computations along with documentation, tutorials and other resources for support.

Amazon Web Services – Amazon has launched toolkits for developers along with applications which range from image interpretation to facial recognition.

OpenNN – It is an open-source, high-performance library for advanced analytics and is written in C++ programming language. It implements neural networks. It has a lot of tutorials and documentation along with an advanced tool known as Neural Designer.

Apache Spark – It is a framework for large-scale processing of data. It also provides a programming tool for deep learning on various machines.

Caffe – It is a framework for deep learning and is used in various industrial applications in the area of speech, vision and expression.

Veles – It is another deep learning platform. It is written in C++ and makes use of Python for interaction between the nodes.

Machine Learning Applications

Following are some of the applications of machine learning:

Cognitive Services

Medical Services

Language Processing

Business Management

Image Recognition

Face Detection

Video Games

Computer Vision

Pattern Recognition

Machine Learning in Bioinformatics

The term bioinformatics is a combination of two words: bio (related to biology) and informatics (information). Thus bioinformatics is a field that deals with processing and understanding biological data using computational and statistical approaches. Machine learning has a number of applications in bioinformatics, including the following subfields:

Genomics – Genomics is the study of the DNA of organisms. Machine learning systems can help in finding the locations of protein-encoding genes in a DNA sequence. Gene prediction is performed using two types of searches, extrinsic and intrinsic. Machine learning is also used in problems related to DNA sequence alignment.
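
As a hedged illustration of the alignment problem mentioned above, here is classical Needleman-Wunsch global alignment scoring, a standard non-ML baseline (the scoring values are conventional choices, not taken from this article):

```python
# Needleman-Wunsch global alignment score via dynamic programming.
def align_score(a, b, match=1, mismatch=-1, gap=-1):
    # dp[i][j] = best score for aligning a[:i] against b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        dp[i][0] = i * gap
    for j in range(1, len(b) + 1):
        dp[0][j] = j * gap
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,  # substitution / match
                           dp[i - 1][j] + gap,      # gap in b
                           dp[i][j - 1] + gap)      # gap in a
    return dp[-1][-1]

print(align_score("GATTACA", "GATTACA"))  # identical sequences: 7 matches
```

Machine learning approaches to alignment typically learn the scoring parameters or replace the scoring function, rather than the dynamic program itself.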

Proteomics – Proteomics is the study of proteins and amino acids. Proteomics is applied to problems related to proteins like protein side-chain prediction, protein modeling, and protein map prediction.

Microarrays – Microarrays are used to collect data about large amounts of biological material. Machine learning can help in data analysis, pattern prediction, and genetic induction, and can also help in detecting different types of cancer in genes.

Systems Biology – It deals with the interactions of biological components in a system, such as DNA, RNA, proteins, and metabolites. Machine learning helps in modeling these interactions.

Text mining – Machine learning helps in the extraction of knowledge from text through natural language processing techniques.

Deep Learning


Deep Learning is a part of the broader field of machine learning and is based on learning data representations. It is built on artificial neural networks. A deep learning algorithm uses many layers of processing, each layer taking the output of the previous layer as its input. The algorithms used can be supervised or unsupervised. Deep learning was mainly developed to handle complex mappings from inputs to outputs. It is another hot topic for an M.Tech thesis and project, along with machine learning.

Deep Neural Network

Deep Neural Network is a type of artificial neural network with multiple layers hidden between the input layer and the output layer. These hidden layers form a feature hierarchy, with each layer increasing the complexity and abstraction of the data. This gives the network the ability to handle very large, high-dimensional data sets with millions of parameters. Training a deep neural network proceeds as follows:

Take some examples from a sample dataset.

Calculate the error of the network on these examples.

Adjust the weights of the network to reduce the error.

Repeat the procedure.
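
The four steps above can be sketched in a few lines: a tiny one-hidden-layer network trained on the XOR function by gradient descent (an illustrative toy, not a production recipe; the architecture and data are made up for the example).

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # step 1: examples
t = np.array([[0], [1], [1], [0]], dtype=float)              # target outputs

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # hidden layer
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)   # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):                # step 4: repeat the procedure
    h = sigmoid(X @ W1 + b1)         # each layer feeds the next one
    y = sigmoid(h @ W2 + b2)
    err = y - t                      # step 2: error for this network
    losses.append(float((err ** 2).mean()))
    # step 3: adjust the weights to reduce the error (backpropagation)
    d2 = err * y * (1 - y)
    d1 = (d2 @ W2.T) * h * (1 - h)
    W2 -= h.T @ d2; b2 -= d2.sum(0)
    W1 -= X.T @ d1; b1 -= d1.sum(0)

print(losses[0], losses[-1])  # the error should shrink as training repeats
```

XOR is the classic example here because a network with no hidden layer cannot learn it, so the hidden layer's feature hierarchy is doing real work.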

Applications of Deep Learning

Here are some of the applications of Deep Learning:

Automatic Speech Recognition

Natural Language Processing

Customer Relationship Management

Bioinformatics

Mobile Advertising

Advantages of Deep Learning

Deep learning helps in solving, at high speed, certain complex problems that were earlier left unsolved. It is very useful in real-world applications. The following are some of its main advantages:

Eliminates unnecessary costs – Deep learning helps to eliminate unnecessary costs by detecting defects and errors in the system.

Identifies defects that are otherwise hard to find – Deep learning helps in identifying defects that would otherwise remain untraced in the system.

Can inspect irregular shapes and patterns – Deep learning can inspect irregular shapes and patterns that are difficult for classical machine learning to detect.

From this introduction, you can see why this is considered a hot topic for an M.Tech thesis or project. This was just a basic introduction to machine learning and deep learning; there is much more to explore in these fields, and you will learn more once you start researching the topic for your M.Tech thesis. You can get thesis assistance and guidance on this topic from experts specialized in this field.

Research and Thesis Topics in Machine Learning

Here is the list of current research and thesis topics in Machine Learning :

Machine Learning Algorithms

Supervised Machine Learning

Unsupervised Machine Learning

Neural Networks

Predictive Learning

Bayesian Network

Data Mining

To start with machine learning, you need to know some algorithms. Machine learning algorithms are classified into three categories, which provide the base for machine learning: supervised learning, unsupervised learning, and reinforcement learning. The choice of algorithm depends on the type of task to be done, along with the type, quality, and nature of the data available. The role of input data is crucial in machine learning algorithms.

Computer Vision is a field that deals with building systems that can read and interpret images. In simple terms, computer vision is a method of imparting human-like vision to machines. In computer vision, data is collected from the images supplied to the system, and the system takes action according to the information it interprets from what it sees.

Supervised Machine Learning – It is a good topic for a machine learning master's thesis. It is a type of machine learning in which the system makes predictions based on known, labeled data sets. Input and output are provided to the system along with feedback. Supervised learning is further classified into classification and regression problems: in a classification problem the output is a category, while in a regression problem the output is a real value.
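
A hedged sketch of a supervised classifier: k-nearest neighbours on a tiny made-up labelled data set (inputs with known outputs, as described above; all points and labels are invented for the example).

```python
from collections import Counter

# Tiny labelled training set: two clusters, classes "A" and "B".
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
         ((4.0, 4.2), "B"), ((4.5, 4.0), "B")]

def predict(x, k=3):
    # rank training points by squared distance to x, then vote among the k nearest
    nearest = sorted(train, key=lambda p: (p[0][0] - x[0]) ** 2 + (p[0][1] - x[1]) ** 2)
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

print(predict((1.1, 0.9)))  # "A": the majority of its neighbours are class A
print(predict((4.2, 4.1)))  # "B"
```

This is classification (the output is a category); predicting a real value from the neighbours' averages instead would make it regression.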

Unsupervised Machine Learning – It is another category of machine learning in which the input is known but the output is not. No prior training is provided to the system, unlike in supervised learning. The main purpose of unsupervised learning is to model the underlying structure of the data. Clustering and association are the two main types of unsupervised learning problems; k-means and Apriori are examples of unsupervised learning algorithms.
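
A minimal sketch of the k-means algorithm named above, run on made-up data drawn around two centres; note that no labels are used anywhere, which is what makes it unsupervised.

```python
import numpy as np

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.5, (20, 2)),   # blob around (0, 0)
                 rng.normal(5.0, 0.5, (20, 2))])  # blob around (5, 5)
rng.shuffle(pts)

def kmeans(X, k=2, iters=20):
    centers = X[:k].copy()                        # naive initialisation
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # assignment step: each point joins its nearest centre
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # update step: each centre moves to the mean of its members
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return centers, labels

centers, labels = kmeans(pts)
print(np.round(centers))  # one centre should land near (0, 0), the other near (5, 5)
```

Production code would use smarter initialisation (e.g. k-means++) and a convergence test, but the two alternating steps are the whole algorithm.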

Deep Learning is a hot topic in machine learning and was introduced above. It is part of the machine learning family and deals with the functioning of artificial neural networks, which are inspired by the human brain. It is a growing and exciting field, and it has made the practical implementation of various machine learning applications possible.

Neural Networks – Artificial neural networks are systems inspired by biological neural networks, and loosely modeled on how the human brain works. They are an important tool in machine learning and a good topic for a master's thesis and research. They find application in computer vision, speech recognition, machine translation, and more. An artificial neural network is a collection of connected nodes that represent neurons.

Reinforcement Learning is a category of machine learning algorithms that deals with software agents and studies how these agents take actions in an environment in order to maximize their performance. It differs from supervised learning in that correct input-output pairs are not provided.

Predictive Learning is another good topic for a thesis in machine learning. In this technique, an agent builds a model of the environment in which it performs actions. There is a related field known as predictive analytics, which is used to make predictions about unknown future events; for this, techniques like data mining, statistics, modeling, machine learning, and artificial intelligence are used.

Bayesian Network – It is a network that represents probabilistic relationships via a directed acyclic graph (DAG). There are algorithms in Bayesian networks for inference and learning. Each node in the network has a probability function that takes the values of the node's parents as input and gives the probability of the value associated with the node. Bayesian networks find application in bioinformatics, image processing, and computational biology.
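
A tiny concrete sketch of such a network over a DAG, using the classic rain/sprinkler/wet-grass textbook example (the probability numbers are illustrative, not from this article); each node carries a probability function of its parents, and inference is done by enumeration.

```python
from itertools import product

P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: {True: 0.01, False: 0.99},    # P(sprinkler | rain)
               False: {True: 0.4, False: 0.6}}
P_wet = {(True, True): 0.99, (True, False): 0.90,  # P(wet | sprinkler, rain)
         (False, True): 0.80, (False, False): 0.0}

def joint(rain, sprinkler, wet):
    """Chain rule along the DAG: P(r) * P(s | r) * P(w | s, r)."""
    pw = P_wet[(sprinkler, rain)]
    return P_rain[rain] * P_sprinkler[rain][sprinkler] * (pw if wet else 1.0 - pw)

# Inference by enumeration: P(rain | wet grass observed)
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(round(num / den, 4))  # 0.3577 with these numbers
```

Enumeration is exponential in the number of variables; the learning and inference algorithms mentioned above (e.g. variable elimination, MCMC) exist precisely to do better than this on large networks.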

Data Mining is the process of finding patterns in large data sets to extract valuable information for better decisions. It is a hot area of research. It uses methods from machine learning, statistics, and database systems. Data mining techniques include clustering, association, decision trees, and classification.
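
The association technique mentioned above can be sketched minimally: counting frequent item pairs in made-up shopping transactions, which is the support-counting core of Apriori-style mining (all items and the support threshold are invented for the example).

```python
from itertools import combinations
from collections import Counter

transactions = [{"bread", "milk"}, {"bread", "butter"},
                {"bread", "milk", "butter"}, {"milk", "butter"}]

pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# keep only pairs meeting a minimum support of 2 transactions
frequent = sorted(p for p, n in pair_counts.items() if n >= 2)
print(frequent)
```

Full Apriori extends this by growing candidate itemsets level by level and pruning any candidate whose subsets are not themselves frequent.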

Click on the following link to download the latest thesis and research topics in Machine Learning

Latest Thesis and Research Topics on Machine Learning(pdf)

For more details, contact us. You can call us on +91-9465330425 or drop an email at [email protected] for any type of dissertation help in India. You can also fill out the query form on the website.


Kindson The Genius

Providing the best learning experience for professionals

10 Machine Learning Project (Thesis) Topics for 2020

kindsonthegenius

Are you looking for some interesting ideas for your thesis, project, or dissertation? Then be sure that a machine learning topic would be a very good one to write on. I have outlined 10 different topics. These topics are really good because you can easily obtain the dataset (I will provide the link to each dataset), and you can also get some support from me. Let me know if you need any support in preparing your thesis.

You can leave a comment below in the comment area.


1.  Machine Learning Model for Classification and Detection of Breast Cancer (Classification)

The data is provided by an oncology department and details instances described by nine attributes.

You can obtain the dataset from here

2. Intelligent Internet Ads Generation (Classification)

This is one of the most interesting topics for me, because the revenue generated or spent by an ads campaign depends not just on the volume of the ads but also on their relevance. It is therefore possible to increase revenue and reduce spending by developing a machine learning model that selects relevant ads with a high level of accuracy. The dataset provides a collection of ads as well as the structure and geometry of the ads.

Get the ads dataset from here

3. Feature Extraction for National Census Data (Clustering)

This looks like big data stuff. But no! It's simply a dataset you can use for analysis: the actual data obtained from the US census in 1990. There are 68 attributes for each record, and clustering would be performed to identify trends in the data.

You can obtain the census dataset from here

4. Movie Outcome Prediction (Classification)

This is quite a tasking project, but it's quite interesting. Models already exist to predict the ratings of movies on a scale of 0 to 10 or 1 to 5, but this takes it a step further: you actually need to determine the outcome of the movie. The data set is a large multivariate dataset of movie directors, casts, individual actor roles, remarks, studios, and relevant documents.

You can get the movies dataset from here

5. Forest Fire Area Coverage Prediction (Regression)

This project has been classified as difficult, but I don't think it is. The objective is to predict the area affected by forest fires. The dataset includes relevant meteorological information and other parameters taken from a region of Portugal.

You can get the fire dataset from here

6. Atmospheric Ozone Level Analysis and Detection (Clustering)

Two ground ozone datasets are provided for this. Data includes temperatures at various times of the day as well as wind speed. The data included in the dataset was collected in a span of 6 years from 1998 to 2004.

You can get the Ozone dataset from here

7. Crime Prediction in New York City (Regression)

If you have watched the TV series 'Person of Interest', created by Jonathan Nolan, you will appreciate the possibility of predicting violent criminal activities before they actually occur. The dataset would contain historical data on crime rates and the types of crimes occurring per region.

You can get the crime dataset from here

8. Sentiment Analysis on Amazon ECommerce User Reviews (Classification)

The dataset for this project is derived from review comments by Amazon users. The model should learn from the training dataset and classify the reviews by sentiment. Granularity can be improved by generating predictions based on location and other factors.

You can get the reviews dataset from here
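
A hedged sketch of the classification step: a tiny bag-of-words naive Bayes sentiment classifier trained on made-up review snippets (the training texts are invented for illustration, not drawn from the Amazon data set).

```python
from collections import Counter
import math

train = [("great product works well", "pos"),
         ("love it excellent quality", "pos"),
         ("terrible waste of money", "neg"),
         ("broke quickly very poor", "neg")]

counts = {"pos": Counter(), "neg": Counter()}
for text, label in train:
    counts[label].update(text.split())
vocab = set(w for c in counts.values() for w in c)

def classify(text):
    scores = {}
    for label, c in counts.items():
        total = sum(c.values())
        # log P(label) + sum of log P(word | label) with add-one smoothing
        score = math.log(0.5)
        for w in text.split():
            score += math.log((c[w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("excellent quality product"))  # "pos"
```

Real review data would need tokenisation, a much larger vocabulary, and held-out evaluation, but the scoring logic is the same.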

9. Home Electrical Power Consumption Analysis (Regression)

Everyone uses electricity at home. Or rather, almost everyone! Would it not be great to have a system that helps predict electricity consumption? The training dataset provided for this project includes features such as the size of the home, the duration of use, and more.

You can get the dataset from here

10. Predictive Modelling of Individual Human Knowledge (Classification and Clustering)

Here the available dataset provides a collection of data about an individual's knowledge of a subject matter. You are required to create a model that tries to quantify the amount of knowledge the individual has on the given subject. You can be creative by also trying to infer the performance of the user on certain exams.

I hope these 10 machine learning project topics will be helpful to you.

Thanks for reading, and do leave a comment below if you need some support.


Kindson Munonye is currently completing his doctoral program in Software Engineering at Budapest University of Technology and Economics.


PHD PRIME

Thesis Topics for Machine Learning

Machine learning is one of the fastest-growing fields of research for classification, clustering, and prediction of input data. Ensemble and hybridization techniques have contributed to the improvement of machine learning models; as a result, the computational speed, accuracy, and robustness of machine learning models have been enhanced. Through this article, you can get an overview of novel machine learning models, their design, performance, merits, and uses, explained via a new taxonomic approach. You can also get all the essential details regarding any thesis topic for machine learning from this page.

At present, many new ensemble and hybridized machine learning models are being introduced and developed. Here the essentials of thesis writing are presented to you by our certified writers and developers. What are the essential elements of a thesis statement?

  • First of all, understand that writing the thesis statement is a crucial process that involves a lot of time and thought
  • Enough research data and evidence have to be gathered before writing a thesis statement
  • The main idea or objective has to be presented clearly, with supporting evidence
  • Also remember that the thesis statement should be in accordance with the argument, where adjustments are allowed

Usually, research scholars interact with our writers and experts for all aspects of thesis writing in machine learning. So we insist that you contact us much before you start your thesis so that you can have a clear-cut vision and well-strategized approach towards writing the best thesis.

Top 5 Research Thesis Topics for Machine Learning

Let us now have an idea about the various headings to be included in a thesis on any machine learning topic.

  • Introduction – overview of the thesis
  • Related / Existing work – presentation of the existing research
  • Problem definition/statement – identifies and highlights the problems addressed
  • Research methodology – conveys the proposed concepts
  • Results and Discussion – compares the results of the proposed work with previous works
  • Conclusion and future work – presents the outcomes of the proposed work

The introduction is the very first part of your thesis. It is the way by which you tend to create the first impression in the minds of the readers. What are the parts of the introduction in the thesis?

  • The issue under examination is the core of the overview
  • The main idea and assertion have to be mentioned clearly
  • The thesis statement and argument form the fundamental aspects here
  • Address the audience to show them that they are in the right place
  • The scope of your paper should be stated satisfactorily
  • The planning-based approach that you used to conduct the research

In general, the choice of words, tone, approach, and language decides the quality of a thesis, and likewise of its introduction. Our technical team and expert writers have gained ample experience writing on thesis topics in machine learning, and the field knowledge and expertise we have gathered will be of great use to you. Let us now talk about the next important part of a thesis: the issue.

What are the guidelines for thesis writing? 

Under the heading of the issue, the following aspects of the research are to be included:

  • The background history of the issue or concern whose solution is stated as your objective
  • The impact of the issue on the field
  • Important characteristic features that affect the issue
  • Potential research solutions that are undertaken for research

With the massive amount of reliable and authentic research materials that we provide, you can surely get all the necessary information to include in the issues part of your thesis. Also, our engineers and technical team are here to solve any kind of technical queries that you may get. Let us now talk about the literature review  

LITERATURE REVIEW 

  • With important references and constructs from standard textbooks, journals, and relevant publications, you need to describe:
      • The relevant theory
      • An explanation of the issue
      • Potential solutions
      • Theoretical constructs
      • An explanation of the major theories
  • Empirical literature from journal articles is considered for the following aspects:
      • An explanation of the latest empirical studies
      • A summary of the methodology adopted
      • The important findings of the study
      • The constraints associated with your findings
  • The pathway of your research study has to be organized in line with the literature review, making key notes on:
      • The referred definitions and concepts
      • Unique aspects of the issues under examination
      • The suitable method for your research

If you are searching for the best and most reliable online research guide for all kinds of thesis topics in machine learning then you are here at the right place. You can get professional and customized research support aligned with your institutional format from our experts. Let us now look into the method section in detail below  

The following are the different aspects that you need to incorporate in the methods section of your thesis:

  • The research questions and issues under your examination
  • A description of the proposed work, such as data collection
  • A rationale and justification for the method of your choice

In addition to these aspects, you need to provide a clear description of all the research methods that you adopt in your study. For this purpose, our research experts will provide you with details on novel and innovative approaches useful for your research. You can also get concise and precise quantitative research data from us. Let us now look into the results section.

RESULTS AND DISCUSSION

On the page of results and discussion you need to incorporate the following aspects

  • Description of major findings
  • Visualization tools like charts, graphs, and tables to present the findings
  • Relevant previous studies and results
  • Creative and new results that you obtained
  • Scopes to expand the previous studies with your findings
  • Constraints of your study

The support of technical experts can help you do the best research work in machine learning. An interested researcher plus reliable, experienced research support makes the best PhD work possible, and with our guidance you get access to that combination. Let us now discuss the conclusion.

Conclusion and recommendation

In the conclusion, you need to include the following aspects:

  • Recap of issues being discussed
  • Methods used and major findings
  • Comparison between the original objective and accomplished results
  • Scope for future expansion of your research

For each and every aspect of your machine learning PhD thesis , you can get complete support from our experts. In this respect let us now look to the topmost machine learning thesis topics below  

Top 5 Thesis Topics for machine learning

  • Healthcare – machine learning is of great importance to physicians in the following areas:
      • Chatbots for speech recognition
      • Pattern recognition for disease detection
      • Treatment recommendation
      • Detecting cancerous cells
      • Body fluid analysis
      • Identification of phenotypes in the case of rare diseases
  • Fault detection and predictive analysis – classifying data into groups for fault detection is possible using machine learning. From product development to predicting stock-market and real-estate prices, predictive analytics is of great importance. Some real-time examples:
      • Distinguishing fraudulent from legitimate transactions
      • Improved prediction mechanisms for detecting faults
  • Statistical arbitrage – using a trading algorithm that applies a proper strategy to trading huge volumes of securities is called statistical arbitrage. Machine learning is used to enhance the arbitrage strategy, as a result of which better results can be obtained. Real-time examples:
      • Analysis of huge data sets
      • Algorithm-based trading for market-microstructure analysis
      • Real-time arbitrage opportunities
  • Feature extraction – to help predictive analytics mechanisms obtain higher accuracy, feature extraction using machine learning plays a significant role. Dataset annotation can be performed with machine learning extraction methods, where structured data is extracted from unstructured information; extraction of critical information becomes easy even when large volumes of data are processed. Real-time examples:
      • Vocal cord disorder prediction
      • Mechanisms for the prevention, diagnosis, and treatment of many disorders
      • Detecting and solving many physiological problems swiftly
  • Speech recognition – machine learning methodologies can be used to translate speech into text. Recorded speech and real-time voice can be converted to text by systems designed for this purpose, and speech can be classified by intensity, time, and frequency. Voice search, appliance control, and voice dialing are the main real-time examples.

To get confidential research guidance from experts on all these thesis topics for machine learning, feel free to contact us. With more than 15 years of customer satisfaction, we provide in-depth research and advanced project support for all thesis topics in machine learning. Our thesis writing support also includes the following:

  • Multiple revisions
  • Complete grammatical check
  • Formatting and editing
  • Benchmark reference and citations from topmost journals
  • Work privacy
  • Internal review

We ensure all these criteria are met by certified engineers, developers, and writers, so you can avail yourself of our services with confidence. We are here to support you fully. Let us now see some important machine learning methods.

Machine learning methods

Machine learning techniques are most often used to make automatic decisions for whatever kind of input they are trained on and implemented for. Machine learning approaches are therefore expected to support the following aspects of decision making:

  • Maximum accuracy of recommendations
  • In-depth understanding and analysis before deciding to increase the trustworthiness

Decision making using machine learning methods provides higher prediction accuracy and more comprehensible models, in implicit and explicit learning respectively. For any doubts and queries regarding the above-mentioned machine learning and decision-making approaches, feel free to contact us at any time convenient for you. Our technical team is highly experienced and skilled in resolving any kind of query. Let us now see the important machine learning algorithms.

Machine learning algorithms

Machine learning algorithms are diverse and can be oriented toward the various objectives and goals for which machine learning methods are frequently adopted:

  • OneR, ZeroR, and Cubist
  • RIPPER or Repeated Incremental Pruning to Produce Error Reduction
  • Random forest, boosting, and AdaBoost
  • Gradient Boosted Regression Trees and the Stacked Generalization
  • Gradient Boosting Machines and Bootstrapped Aggregation
  • Convolutional Neural Networks and Stacked Autoencoders
  • Deep Boltzmann Machine and Deep Belief Networks
  • Projection Pursuit and Sammon Mapping
  • Principal Component Analysis and Partial Least Square Discriminant Analysis
  • Quadratic Discriminant Analysis and Flexible Discriminant Analysis
  • Partial Least Squares Regression and Multidimensional Scaling
  • Principal Component Regression and Mixture Discriminant Analysis
  • Regularized Discriminant Analysis and Linear Discriminant Analysis
  • K-means and K-medians
  • Expectation Maximization and Hierarchical Clustering
  • Ridge Regression and Elastic Net
  • Least Angle Regression and the LASSO or Least Absolute Shrinkage and Selection Operator
  • Hopfield Network and Perceptron
  • Backpropagation and Radial Basis Function Network
  • Naive Bayes and Bayesian Network
  • Averaged One-Dependence Estimators and Gaussian Naive Bayes
  • Bayesian Belief Networks and Multinomial Naive Bayes
  • Logistic, stepwise, and linear regression
  • Locally Estimated Scatterplot Smoothing and Ordinary Least Squares Regression
  • Multivariate Adaptive Regression Splines
  • M5, C4.5, C5.0, and Decision Stump
  • Conditional Decision Trees and Iterative Dichotomiser 3
  • Chi-squared Automatic Interaction Detection
  • Classification and regression tree
  • K Nearest Neighbour and Self Organising Map
  • Locally Weighted Learning and Learning Vector Quantization


What is the process of linear regression?

The following are the three important stages in the process of linear regression analysis

  • Data correlation and directionality analysis
  • Model estimation based on linear fitting
  • Estimation of validity and assessing the merits of the model
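The three stages above can be illustrated end-to-end with plain NumPy; the data below is synthetic and all numbers are made up for illustration:

```python
import numpy as np

# Stage 1: check correlation/directionality of the (synthetic) data.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(0.0, 1.0, size=100)   # made-up linear relation
corr = np.corrcoef(x, y)[0, 1]                       # near +1: strong positive relation

# Stage 2: estimate the model by ordinary least squares (linear fitting).
X = np.column_stack([np.ones_like(x), x])            # intercept column + feature
beta = np.linalg.lstsq(X, y, rcond=None)[0]          # [intercept, slope]

# Stage 3: assess validity via the coefficient of determination R^2.
residuals = y - X @ beta
r2 = 1.0 - residuals.var() / y.var()
```

A high correlation in stage 1 and an R^2 close to 1 in stage 3 together indicate that the linear fit from stage 2 is a valid model for this data.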

Certain characteristic features must be present in a model for an algorithm to work properly. Feature engineering is the process of deriving such essential features from raw data. With the most appropriate features extracted, the algorithms become simpler, and accurate results can be obtained even from non-ideal algorithms. What are the objectives of feature engineering?

  • Preparation of input data for better compatibility with the chosen machine learning algorithm
  • Enhancement of the efficiency and working of machine learning models

With these goals, feature engineering becomes one of the most important aspects of a machine learning research project. What are the techniques used in feature engineering?

  • Imputation and binning
  • Log transform and feature split
  • Outliers handling and grouping functions
  • One hot encoding and scaling
  • Data extraction
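Several of these techniques can be sketched in plain NumPy; the column names and values below are hypothetical:

```python
import numpy as np

# Imputation: fill missing ages with the column median (values are made up).
ages = np.array([22.0, np.nan, 35.0, 58.0, np.nan, 41.0])
ages_imputed = np.where(np.isnan(ages), np.nanmedian(ages), ages)

# Binning: map continuous ages to ordinal buckets.
bins = [0, 30, 50, 120]                        # young / middle-aged / senior
age_bucket = np.digitize(ages_imputed, bins)   # 1, 2 or 3

# One-hot encoding: expand a categorical column into indicator columns.
colors = ["red", "green", "red", "blue"]
categories = sorted(set(colors))               # ['blue', 'green', 'red']
one_hot = np.array([[int(c == cat) for cat in categories] for c in colors])

# Log transform: compress a heavy-tailed feature.
incomes = np.array([1e3, 5e4, 2e6])
log_incomes = np.log1p(incomes)
```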


Hybrid machine learning models

  • When machine learning methods are integrated with other methods such as optimization approaches and soft computing, drastic improvements can be observed in the resulting model.
  • Ensemble methods combine multiple machine learning classifiers using grouping techniques like boosting and bagging.

The continued success of machine learning depends in part on advancements in ensemble and hybrid methods. Some hybrid methods include:

  • NBTree and functional tree
  • Hybrid fuzzy with decision tree
  • Logistic model tree and hybrid Hoeffding tree


Performance analysis of machine learning

The confusion matrix is prominently used for analyzing machine learning models. The following are the fundamental terms associated with it:

  • False positives and false negatives: the predicted class contradicts the actual class
  • True negatives: correct prediction of negative values, with ‘no’ in both the actual and predicted class
  • True positives: correct prediction of positive values, with ‘yes’ in both the actual and predicted class

Using these fundamental parameters, the essential measures of model performance are obtained as follows:

  • Precision is the ratio between the number of correctly predicted positives and the total number of predicted positives
  • Recall is the true positive rate: the ratio of correctly predicted positives to all actual positives
  • F1 score is the harmonic mean of recall and precision, and hence takes into account both false positives and false negatives
  • With an uneven class distribution, the F1 score is a more informative measure than accuracy, which is discussed below
  • Accuracy can be relied on when false positives and false negatives are similarly frequent and costly
  • When false positives and false negatives have different costs, recall and precision are recommended for performance evaluation
  • Accuracy is the ratio between correct predictions and the total number of observations
  • Accuracy is also considered one of the most intuitive measures of machine learning system performance
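As a sketch, these measures can be computed directly from the four confusion-matrix counts; the labels below are made up:

```python
# Hypothetical ground truth and predictions for a binary classifier.
actual    = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
predicted = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))  # true positives
tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))  # true negatives
fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))  # false positives
fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))  # false negatives

precision = tp / (tp + fp)                          # predicted positives that are real
recall = tp / (tp + fn)                             # actual positives that were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
accuracy = (tp + tn) / (tp + tn + fp + fn)
```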



Information Systems and Machine Learning Lab (ISMLL), Institute of Computer Science, University of Hildesheim


Available Master Thesis Topics at ISMLL

Go is an old two-player board game that originated in Asia. The game is easy to learn but hard to master, which makes it popular all over the world. Go is also interesting for artificial intelligence research. For Chess, programs already exist that can beat any human player; for Go, the strongest AI players are far from beating human experts. There are two main reasons: i) the search space of Go is extremely large, since many move options exist, and ii) moves can have important long-term effects, so that no good heuristic has been found that can evaluate a board state. Thus, techniques that led to success in games like Chess cannot be applied successfully to Go.

Strong state-of-the-art Go programs combine Monte Carlo Tree Search (a heuristic search algorithm) with move prediction algorithms that prune the search space and guide the search. The aim of this thesis is to compare such move prediction algorithms. To that end, two algorithms need to be integrated into an existing Go engine (e.g. Fuego), and the better move predictor will be determined by a direct contest.
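To illustrate how a move predictor can guide the search, here is a minimal sketch of UCB-style child selection as used inside Monte Carlo Tree Search; the `prior` term stands in for a learned move-prediction score, and all statistics below are hypothetical:

```python
import math

def ucb_score(visits, reward, parent_visits, prior, c=1.4):
    """Exploitation + exploration + a bias from the (hypothetical) move predictor."""
    if visits == 0:
        return float("inf")                      # try every untested move once
    exploit = reward / visits                    # average playout result
    explore = c * math.sqrt(math.log(parent_visits) / visits)
    return exploit + explore + prior

def select_move(children, parent_visits):
    """Pick the child move with the highest UCB score."""
    return max(children, key=lambda m: ucb_score(
        children[m]["visits"], children[m]["reward"],
        parent_visits, children[m]["prior"]))

# Hypothetical statistics for three candidate moves at the root.
children = {
    "A": {"visits": 10, "reward": 7.0, "prior": 0.1},
    "B": {"visits": 2, "reward": 1.0, "prior": 0.3},
    "C": {"visits": 0, "reward": 0.0, "prior": 0.6},
}
best = select_move(children, parent_visits=12)   # unvisited "C" is picked first
```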

Learning the structure of Bayesian networks from data is one of the core tasks in Bayesian networks and has recently been addressed by several algorithms. Starting from the CGNM/BN implementation in Java, which supports basic tasks such as I/O, inference, and parameter learning, first some simple and fast algorithms like K2 should be implemented. Building on that, two more complex algorithms (PC and GES) should be implemented efficiently. The performance of these algorithms should be evaluated in terms of the quality of the solution found as well as runtime on real-life datasets. Finally, synthetic datasets created from synthetic BNs should be used to assess the algorithms under controllable conditions.
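As a sketch of the kind of local score such score-based structure learners optimize, here is a decomposable BIC family score for discrete data; variable names and data are hypothetical, and this is not the CGNM/BN API:

```python
import math
from collections import Counter

def bic_family_score(data, child, parents, arity):
    """Log-likelihood of `child` given `parents`, minus a BIC complexity penalty."""
    joint = Counter(tuple(row[p] for p in parents) + (row[child],) for row in data)
    marg = Counter(tuple(row[p] for p in parents) for row in data)
    loglik = sum(n * math.log(n / marg[key[:-1]]) for key, n in joint.items())
    n_params = (arity[child] - 1) * math.prod(arity[p] for p in parents)
    return loglik - 0.5 * math.log(len(data)) * n_params

# Toy data in which X almost determines Y, so adding X as a parent of Y
# should raise Y's family score despite the extra parameters.
data = [{"X": 0, "Y": 0}] * 40 + [{"X": 1, "Y": 1}] * 40 + [{"X": 0, "Y": 1}] * 5
arity = {"X": 2, "Y": 2}
score_empty = bic_family_score(data, "Y", (), arity)
score_with_x = bic_family_score(data, "Y", ("X",), arity)
```

K2-style and hill-climbing learners repeatedly compare exactly such family scores when deciding whether to add, remove, or reverse an edge.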

Many web-scale applications capture multi-way data, represented as a multidimensional array, or tensor. Examples of such applications are social network analysis (user, user, and time), recommender systems (user, product, time, and auxiliary relations), and topic modeling (topic, document, word). Tensor factorization is used as a tool to learn latent features from such multi-way datasets. The scalability of tensor factorization is an important aspect, as these tensors easily exhaust the memory of a single node. Existing methods partition the data among workers but usually require each worker to hold the latent matrices in memory.

In this thesis, you will explore data partitioning schemes under which no worker has to load the complete latent matrices. You will investigate scalability in terms of data size, the number of nodes, and the convergence behavior.

  • 1. Shin, Kijung, Lee Sael, and U. Kang. "Fully scalable methods for distributed tensor factorization." IEEE Transactions on Knowledge and Data Engineering, 2017.
  • 2. Beutel, Alex, et al. "Flexifact: Scalable flexible factorization of coupled tensors on hadoop." Proceedings of the 2014 SIAM International Conference on Data Mining. Society for Industrial and Applied Mathematics, 2014.
  • 3. Oh, Jinoh, et al. "S-hot: Scalable high-order tucker decomposition." Proceedings of the Tenth ACM International Conference on Web Search and Data Mining. ACM, 2017.
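As a sketch of what such systems parallelize, here is a minimal single-node alternating least squares (ALS) solver for the CP factorization T[i,j,k] ≈ Σ_r A[i,r]·B[j,r]·C[k,r], on a synthetic tensor:

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker product; rows indexed by (u, v) pairs."""
    return np.einsum("ur,vr->uvr", U, V).reshape(U.shape[0] * V.shape[0], -1)

def cp_als(T, rank, n_iter=50, seed=0):
    """ALS for T[i,j,k] ~ sum_r A[i,r] * B[j,r] * C[k,r]."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A, B, C = (rng.normal(size=(n, rank)) for n in (I, J, K))
    for _ in range(n_iter):
        # Each step is an exact least-squares solve for one factor matrix;
        # distributed systems split exactly these solves across workers.
        A = np.linalg.lstsq(khatri_rao(B, C), T.reshape(I, J * K).T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C), T.transpose(1, 0, 2).reshape(J, I * K).T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), T.transpose(2, 0, 1).reshape(K, I * J).T, rcond=None)[0].T
    return A, B, C

# Synthetic rank-2 tensor; a rank-3 ALS fit should drive the error far down.
rng = np.random.default_rng(1)
A0, B0, C0 = rng.normal(size=(5, 2)), rng.normal(size=(4, 2)), rng.normal(size=(3, 2))
T = np.einsum("ir,jr,kr->ijk", A0, B0, C0)
A, B, C = cp_als(T, rank=3)
rel_err = np.linalg.norm(T - np.einsum("ir,jr,kr->ijk", A, B, C)) / np.linalg.norm(T)
```

The memory issue discussed above is visible even here: each mode update needs the Khatri-Rao product of the other two factors, which is what clever partitioning tries to avoid materializing in full.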

Reinforcement learning (RL) has seen wide uptake by the research community as well as industry. The RL setting consists of an agent that interacts with an environment and learns a policy that is optimal for solving a certain problem. If the environment is complex, the learning process can slow down, requiring a significant amount of resources to reach the optimal solution. The learning process can be sped up by using parallel and distributed computation as well as by enhancing the exploration capacity of the agent.

In this thesis, you will develop a scalable RL framework that exploits parallelism to speed up the learning process and diversify the exploration capacity of the agents.

  • 1. Dimakopoulou, Maria and Van Roy, Benjamin. "Coordinated Exploration in Concurrent Reinforcement Learning" Proceedings of the 35th International Conference on Machine Learning, 2018.
  • 2. Volodymyr Mnih, et al. "Asynchronous Methods for Deep Reinforcement Learning" Proceedings of The 33rd International Conference on Machine Learning, 2016

Big data is generally characterized by a large number of data instances and features. This growth in data size is driven by sophisticated means of capturing data via sensors and by the growth of online retail services. In both cases, the size of the data ranges from terabytes to petabytes, with billions of features. However, the features that significantly contribute to learning are often very sparse, so sparse machine learning models such as the LASSO are typically used. The existing sparse models that work in distributed settings usually start with large models and have a high communication cost, which dominates the execution time.

In the literature, many screening algorithms [2, 3] exist that reduce the dimensionality of the model, but they are applied as a sequential preprocessing step. In this thesis, you will explore existing screening algorithms by implementing and evaluating them in a distributed setting. An interesting aspect would be to explore the compactness of these models and its impact on the communication cost and convergence. A novel contribution would be to design an efficient distributed optimization algorithm that exploits the compactness of the model.

  • 1. Li, Qingyang, et al. "Parallel Lasso Screening for Big Data Optimization." Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2016.
  • 2. L. E. Ghaoui, V. Viallon, and T. Rabbani. Safe feature elimination for the lasso and sparse supervised learning problems. Pacific Journal of Optimization, (8):667–698, 2012.
  • 3. R. Tibshirani, J. Bien, J. Friedman, T. Hastie, N. Simon, J. Taylor, and R. J. Tibshirani. Strong rules for discarding predictors in lasso-type problems. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 74(2):245–266, 2012.
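As a sketch, the sequential strong rule of [3] can be stated in a few lines; in a distributed setting each worker would apply the same test to its own feature partition. Data below is synthetic:

```python
import numpy as np

# Synthetic regression data with only 3 informative features out of 50.
rng = np.random.default_rng(0)
n, d = 100, 50
X = rng.normal(size=(n, d))
beta_true = np.zeros(d)
beta_true[:3] = [5.0, -4.0, 3.0]
y = X @ beta_true + 0.1 * rng.normal(size=n)

# Strong rule [3]: at level lam, discard feature j if |x_j' y| < 2*lam - lam_max,
# where lam_max = max_j |x_j' y| is the smallest lam giving an all-zero LASSO fit.
corr = np.abs(X.T @ y)
lam_max = corr.max()
lam = 0.7 * lam_max
keep = corr >= 2 * lam - lam_max       # features surviving the screen
```

Only the surviving features need to be optimized (with a post-hoc check of the discarded ones), which is what makes screening attractive for reducing communication in distributed LASSO solvers.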

Learning in large-scale distributed networks is a challenging task because centralization of the data is not feasible: any data change must be reported to the central server, and results may be invalidated by the distributed, dynamically changing, and unreliable nature of such ad hoc networks. Therefore, the goal is to develop distributed mining algorithms that are communication-efficient, scalable, asynchronous, and robust to user mobility, and that achieve accuracy as close as possible to centralized solutions.

One particular approach is to share compact but significant data or models among neighboring compute nodes. The idea is to extract minimal but effective data or models, which reduces the cost of exchange (communication) and increases the prediction accuracy of the neighboring recipient, and hence of the overall network. In this regard, this Master thesis would exploit the emergence of frequent nearest neighbors in high-dimensional data. If such data instances emerge as centroids of local clusters, they are expected to carry useful information about their local neighborhood. Hence, exchanging only such instances can increase overall prediction accuracy while keeping the communication load under control.

Your task is to understand the problem and analyze the baseline approaches for data or model exchange in distributed scenarios. In the end, a working implementation and concrete experimental results are expected. Novel additions to the state of the art will be highly appreciated during evaluation.

  • Miloš Radovanović, Alexandros Nanopoulos, and Mirjana Ivanović. 2009. Nearest neighbors in high-dimensional data: the emergence and influence of hubs. In Proceedings of the 26th Annual International Conference on Machine Learning (ICML '09)
  • Hock Hee Ang, Vivekanand Gopalkrishnan, Steven C. Hoi, and Wee Keong Ng. 2008. Cascade RSVM in Peer-to-Peer Networks. In Proceedings of the 2008 European Conference on Machine Learning and Knowledge Discovery in Databases - Part I (ECML PKDD '08)
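The hub phenomenon studied by Radovanović et al. is easy to reproduce: N_k(x) counts how often point x appears among the k nearest neighbors of other points, and in high dimensions a few "hub" points attain very large N_k. A minimal sketch on synthetic data:

```python
import numpy as np

def k_occurrences(X, k):
    """N_k(x): how often each point appears among the k nearest neighbors of others."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    np.fill_diagonal(d2, np.inf)                          # a point is not its own neighbor
    nn = np.argsort(d2, axis=1)[:, :k]                    # k nearest neighbors per point
    return np.bincount(nn.ravel(), minlength=len(X))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))        # 200 points in 50 dimensions
N_k = k_occurrences(X, k=5)
hubs = np.argsort(N_k)[::-1][:5]      # candidate instances worth exchanging
```

In the thesis setting, each node would compute such hub candidates locally and exchange only those instances with its neighbors.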

Automatic music transcription (AMT), or dictation, is a fundamental problem in the Music Information Retrieval (MIR) area, and is described as the process of converting an acoustic signal into symbolic notation or an equivalent representation. In humans, this skill is often expected of music students only after a couple of years into their college degree. Nevertheless, current state-of-the-art algorithms in AMT still have a lot of room for improvement [3], ever since [2] first attempted automatic duet transcription over three decades ago and [1] presented spectral peak picking for piano note detection. Obtaining a standardized parametric representation of music is important for content-based retrieval and can aid in music analysis. The specific problem targeted by this thesis is polyphonic music transcription, namely multi-pitch estimation, also referred to as multiple-F0 or multi-pitch detection, which may be considered a core problem in automatic transcription. The challenge in this problem is the concurrent presence of distinct notes, leading to interference amongst the corresponding harmonics.

  • [1] Dixon, Simon. "On the computer recognition of solo piano music." Proceedings of Australasian computer music conference. 2000.
  • [2] Moorer, James A. "On the transcription of musical sound by computer." Computer Music Journal (1977): 32-38.
  • [3] Sigtia, Siddharth, Emmanouil Benetos, and Simon Dixon. "An end-to-end neural network for polyphonic piano music transcription." IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP) 24.5 (2016): 927-939.
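As a toy illustration of spectral peak picking, the fundamentals of two concurrent synthetic tones can be read off magnitude-spectrum peaks; real multi-pitch estimation must additionally untangle overlapping harmonics, which this sketch ignores:

```python
import numpy as np

sr = 8000
t = np.arange(sr) / sr                       # one second of synthetic audio
f1, f2 = 440.0, 554.0                        # two concurrent tones (roughly A4 and C#5)
signal = np.sin(2 * np.pi * f1 * t) + 0.8 * np.sin(2 * np.pi * f2 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / sr)

# Keep local maxima that rise above a fraction of the strongest peak.
peaks = [i for i in range(1, len(spectrum) - 1)
         if spectrum[i] > spectrum[i - 1]
         and spectrum[i] > spectrum[i + 1]
         and spectrum[i] > 0.1 * spectrum.max()]
detected = sorted(float(freqs[i]) for i in peaks)    # estimated pitches in Hz
```

The hard part of multi-pitch estimation begins exactly where this toy stops: when notes share harmonics (e.g. an octave apart), peak energies overlap and can no longer be attributed to a single fundamental.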

Parking availability prediction is rapidly gaining interest within the community as an operationally cheap approach to identifying empty parking locations. Parking locations accommodate multiple vehicles and are rarely completely occupied, which makes it difficult to predict occupancy without augmenting the data with external sources, as the data becomes highly imbalanced. Existing forecasting models neither encapsulate the heterogeneous modes/types of parking data nor handle sparse measurements. In this thesis, your task will be to develop a sequence-to-sequence learning framework that addresses the occupancy forecasting problem. Throughout this project, you will build on top of existing methods for multi-step forecasting and deliver a model that is competitive with recent work.

  • [1] Sutskever, Ilya, Oriol Vinyals, and Quoc V. Le. "Sequence to sequence learning with neural networks." Advances in neural information processing systems. 2014.
  • [2] Lin, Trista, et al. "A survey of smart parking solutions." IEEE Transactions on Intelligent Transportation Systems (2017).
  • [3] Chen, Xiao. "Parking occupancy prediction and pattern analysis." Dept. Comput. Sci., Stanford Univ., Stanford, CA, USA, Tech. Rep. CS229-2014 (2014).

Early diagnosis of a disease can increase the chances of finding a cure by enabling proper preventive treatment. With the integration of information technology in the medical domain, electronic health data of patients has become readily available for processing. In order to benefit from the raw signals, data, and meta-data of the observations, it is critical to learn a rich representation of the data and to utilize the obtained feature representations in a machine learning model that aids the predictive analysis task. The continued success of neural networks in machine learning has recently been investigated in health care [1], [2] through LSTMs and recurrent neural networks, due to the time-dependent nature of the data. To this end, the objective of this thesis is to develop a strong neural network that learns a meaningful representation of the data and is able to predict disease onset with competitive accuracy compared to existing baselines.

  • [1] Razavian, Narges, Jake Marcus, and David Sontag. "Multi-task prediction of disease onsets from longitudinal laboratory tests." Machine Learning for Healthcare Conference. 2016.
  • [2] Choi, Edward, et al. "Doctor ai: Predicting clinical events via recurrent neural networks." Machine Learning for Healthcare Conference. 2016.

The notion of attention in machine learning has proved successful in domain adaptation, time series forecasting, and image classification. However, existing attention techniques are designed at the instance level. In this thesis, you will investigate a new form of attention based on dataset meta-feature similarity. The idea in a nutshell is that the performance of a target model for a given hyperparameter configuration depends on the similarity between the target dataset and a support dataset, as well as on the corresponding surrogate value of the support dataset.

  • [1] Vinyals, O., Blundell, C., Lillicrap, T., and Wierstra, D. (2016). Matching networks for one shot learning. In Advances in neural information processing systems (pp. 3630-3638).
  • [2] Jomaa, H. S., Schmidt-Thieme, L., and Grabocka, J. (2019). Dataset2vec: Learning dataset meta-features. arXiv preprint arXiv:1905.11063.

Dividing a reinforcement learning task into a hierarchy of problems is one of the solutions proposed for environments with a large state space. Some of the existing methods focus on decomposing the reward and learning separate value functions per component [1]. Other methods, referred to as feudal learning, split the learning agent into modules, namely a master and a worker: the master learns abstract actions and propagates them to the worker, which receives an intrinsic reward if it complies with the master and aims to maximize the discounted reward [2]. In this thesis, the objective is to focus on another aspect of hierarchical learning, compositional Q-learning [3]. Simply put, in compositional Q-learning the agent aims to maximize the reward of elemental tasks as well as of composite tasks, which are ordered sequences of elemental tasks. This type of learning is interesting for Atari games as well as for robotic control, both of which have readily available source code in the OpenAI repository [4].

  • [1] Van Seijen, Harm, et al. "Hybrid reward architecture for reinforcement learning." Advances in Neural Information Processing Systems. 2017.
  • [2] Vezhnevets, Alexander Sasha, et al. "Feudal networks for hierarchical reinforcement learning." arXiv preprint arXiv:1703.01161 (2017).
  • [3] Tham, Chen K., and Richard W. Prager. "A modular q-learning architecture for manipulator task decomposition." Machine Learning Proceedings 1994. 1994. 309-317.
  • [4] https://github.com/openai/gym
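As a starting point, here is plain tabular Q-learning on a toy "elemental task": a five-state chain with a goal at the right end (environment and rewards are made up). Compositional Q-learning would reuse such elemental value functions inside composite, sequenced tasks:

```python
import random

# Toy elemental task: walk right along a 5-state chain to reach the goal.
N_STATES, ACTIONS = 5, [0, 1]                    # action 0 = left, 1 = right
GOAL = N_STATES - 1

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    return nxt, float(nxt == GOAL), nxt == GOAL  # next state, reward, done

random.seed(0)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2                # learning rate, discount, exploration
for _ in range(500):                             # episodes
    s, done = 0, False
    while not done:
        if random.random() < eps:
            a = random.choice(ACTIONS)                        # explore
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])     # exploit
        s2, r, done = step(s, a)
        target = r + (0.0 if done else gamma * max(q[(s2, b)] for b in ACTIONS))
        q[(s, a)] += alpha * (target - q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)]
```

After training, the greedy policy moves right in every state; a compositional agent would keep one such Q-table per elemental task and a gating mechanism over their sequence.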

One of the existing challenges in reinforcement learning is exploration. After an agent is trained, it tends to stick to the actions that have proved to maximize its previous rewards, without any incentive to explore other, possibly more rewarding, actions. Several approaches have been proposed to boost exploration, which include noise injection into the action space [1], noise injection into the parameter space [2], and curiosity-driven exploration, in which curiosity leads to a higher intrinsic reward [3]. In this thesis, you will focus on uncertainty-driven exploration in discrete action spaces. In this method, every action is associated with a degree of uncertainty, and the best action is selected based on a modified reward that accommodates the level of uncertainty. Similar work has been investigated within the deep learning community and has proved to improve overall performance [4].

  • [1] Osband, Ian, et al. "Deep exploration via bootstrapped DQN." Advances in neural information processing systems. 2016.
  • [2] Plappert, Matthias, et al. "Parameter space noise for exploration." arXiv preprint arXiv:1706.01905 (2017).
  • [3] Pathak, Deepak, et al. "Curiosity-driven exploration by self-supervised prediction." International Conference on Machine Learning (ICML). Vol. 2017. 2017.
  • [4] Kendall, Alex, and Yarin Gal. "What uncertainties do we need in bayesian deep learning for computer vision?." Advances in neural information processing systems. 2017.
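A minimal illustration of the idea on a toy three-armed bandit: each action carries a count-based uncertainty estimate, and the agent always picks the action whose optimistic value (mean plus uncertainty bonus) is highest. Arm payoffs are hypothetical:

```python
import math
import random

# Toy 3-armed bandit; the "uncertainty" of each action is a count-based
# bonus that shrinks as the action is tried more often (UCB1-style).
random.seed(0)
true_means = [0.2, 0.5, 0.8]
counts, totals = [0, 0, 0], [0.0, 0.0, 0.0]

def optimistic_value(a, t):
    if counts[a] == 0:
        return float("inf")                          # untried => maximal uncertainty
    mean = totals[a] / counts[a]
    bonus = math.sqrt(2 * math.log(t) / counts[a])   # uncertainty bonus
    return mean + bonus

for t in range(1, 2001):
    a = max(range(3), key=lambda arm: optimistic_value(arm, t))
    reward = random.gauss(true_means[a], 0.1)        # noisy payoff
    counts[a] += 1
    totals[a] += reward

best_arm = max(range(3), key=lambda arm: counts[arm])
```

The thesis would replace the count-based bonus with a learned uncertainty estimate per discrete action, in the spirit of [4].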

Choosing the set of optimal hyperparameters is an important step in tuning a learning architecture. Existing methods include naive grid search and random search over a given space of values. More statistical methods include Bayesian optimization [1,2] as well as gradient-based tuning [3]. However, with the recent renewed interest in reinforcement learning, it is now feasible to add a controller that is trained simultaneously with the proposed architecture, while the agent adaptively chooses new hyperparameters from a continuous action space [4]. In this thesis, you will work on formalizing the reinforcement learning task by defining the state and the reward associated with every action (i.e., hyperparameter choice). At the first level, you will focus on simple models with a small number of hyperparameters, for example the regularization factor of an SVM, and then progress to adaptively tuning neural networks by selecting the right dropout values, L2 regularization factor, etc.

  • [1] Hutter, Frank, Holger H. Hoos, and Kevin Leyton-Brown. "Sequential model-based optimization for general algorithm configuration." International Conference on Learning and Intelligent Optimization. Springer, Berlin, Heidelberg, 2011.
  • [2] Snoek, Jasper, Hugo Larochelle, and Ryan P. Adams. "Practical bayesian optimization of machine learning algorithms." Advances in neural information processing systems. 2012.
  • [3] Bengio, Yoshua. "Gradient-based optimization of hyperparameters." Neural computation 12.8 (2000): 1889-1900.
  • [4] Lillicrap, Timothy P., et al. "Continuous control with deep reinforcement learning." arXiv preprint arXiv:1509.02971 (2015).
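A simple baseline any RL-based tuner would have to beat is random search, where the "reward" is the negative validation error of a model trained with the sampled hyperparameter. A sketch with ridge regression on synthetic data:

```python
import numpy as np

# Synthetic regression problem and a train/validation split.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 30))
w_true = rng.normal(size=30)
y = X @ w_true + rng.normal(scale=2.0, size=80)
X_tr, y_tr, X_val, y_val = X[:60], y[:60], X[60:], y[60:]

def val_error(lam):
    """Train ridge regression with factor lam; return validation MSE."""
    d = X_tr.shape[1]
    w_hat = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ y_tr)
    return float(((X_val @ w_hat - y_val) ** 2).mean())

# Random search: sample the hyperparameter log-uniformly and keep the best.
candidates = 10 ** rng.uniform(-3, 3, size=50)
errors = [val_error(lam) for lam in candidates]
best_lam = float(candidates[int(np.argmin(errors))])
```

An RL controller would replace the independent sampling with an agent whose state summarizes past (hyperparameter, error) pairs and whose action proposes the next value to try.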

During the design of neural networks, scientists and developers spend a huge amount of time and energy testing different configurations or attempting to improve existing models. These situations often require training the network from scratch, which is very time-consuming. To address this, many methods use already existing models as a starting point and make small changes at a time. One such method, presented by T. Chen et al., is known as Net2Net. By using knowledge-transfer techniques based on function-preserving transformations to replicate nodes or layers from a pre-trained network, and training from that point, the network is guaranteed to either improve or maintain its performance. This project aims to tackle this problem by formulating new methods of knowledge transfer. Your task is to understand the problem and formulate a new method while seeking to improve on the state of the art.

  • 1) Chen, Tianqi, Ian Goodfellow, and Jonathon Shlens. "Net2net: Accelerating learning via knowledge transfer." arXiv preprint arXiv:1511.05641 (2015).
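The core of the Net2WiderNet operation in [1] is a function-preserving transformation: duplicate a hidden unit and split its outgoing weights, so the widened network computes exactly the same function and can be trained further from there. A NumPy sketch for a small two-layer MLP (shapes and data are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 6)), rng.normal(size=6)   # input 4 -> hidden 6
W2, b2 = rng.normal(size=(6, 2)), rng.normal(size=2)   # hidden 6 -> output 2

def forward(x, W1, b1, W2, b2):
    h = np.maximum(0.0, x @ W1 + b1)                   # ReLU hidden layer
    return h @ W2 + b2

def widen(W1, b1, W2, unit):
    """Duplicate `unit`; halve its outgoing weights to preserve the function."""
    W1w = np.column_stack([W1, W1[:, unit]])           # copy incoming weights
    b1w = np.append(b1, b1[unit])
    W2w = np.vstack([W2, W2[unit]])                    # copy outgoing weights...
    W2w[unit] /= 2.0                                   # ...then split them so the
    W2w[-1] /= 2.0                                     # summed contribution is unchanged
    return W1w, b1w, W2w

x = rng.normal(size=(3, 4))
y_old = forward(x, W1, b1, W2, b2)
W1w, b1w, W2w = widen(W1, b1, W2, unit=2)
y_new = forward(x, W1w, b1w, W2w, b2)                  # identical outputs
```

New knowledge-transfer methods in this thesis would likewise need to verify such an invariance (or bound the deviation) before fine-tuning begins.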

In many machine learning problems, researchers are confronted with incomplete or missing data. Several methods try to overcome this issue by using "complete", previously annotated data while feeding models only part of it. One such scenario is human motion classification: databases with complete motion data exist, but a hypothetical real-life application might only have information from the arms, for example. To tackle this problem, many researchers have used deep learning models to model, classify, and even reconstruct the full body of the subject; however, only a few combine these tasks. In this project, your task is to understand the problem and the various approaches, and to compare two of the best state-of-the-art techniques. In the end, an implementation and experiments should be presented. A novel implementation is also highly desirable.

  • 1) Catalin Ionescu, Dragos Papava, Vlad Olaru and Cristian Sminchisescu, Human3.6M: Large Scale Datasets and Predictive Methods for 3D Human Sensing in Natural Environments, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, No. 7, July 2014
  • 2) Hanke, Sten, Matthieu Geist, and Andreas Holzinger. "Human Activity Recognition Using Recurrent Neural Networks." Machine Learning and Knowledge Extraction: First IFIP TC 5, WG 8.4, 8.9, 12.9 International Cross-Domain Conference, CD-MAKE 2017, Reggio, Italy, August 29–September 1, 2017, Proceedings. Vol. 10410. Springer, 2017.
  • 3) Ronao, Charissa Ann, and Sung-Bae Cho. "Human activity recognition with smartphone sensors using deep learning neural networks." Expert Systems with Applications 59 (2016): 235-244.

One of the biggest challenges in artificial intelligence is creating computer agents that are capable of making decisions like a human. A small sub-scenario of this area is simulating a human player in a computer game. Several simple games have already been modeled, but complex and recent games are harder to model due to the large amount of information and features that must be extracted. Two papers [1,2] implement deep learning models that receive visual feedback and output actions as button presses. The models are trained on data from human play sessions, which includes the image on the screen and the buttons currently being pressed. Your job is to analyze different state-of-the-art papers, compare them, and attempt to improve their current results. In the end, an implementation and experimental results should be presented. A novel improvement is highly desirable.

  • 1) Chen, Zhao, and Darvin Yi. "The Game Imitation: Deep Supervised Convolutional Networks for Quick Video Game AI." arXiv preprint arXiv:1702.05663 (2017).

Recently, several novel adaptive schemes for coordinate descent have emerged. For example, [1] proposes to change the actual coordinate system over time such that it adapts to the loss surface. Another idea is to adaptively change the relative update frequencies of the coordinates [2,3], which is useful when the loss is very sensitive to some parameters but very insensitive to others. Typically such adaptive methods yield superior performance but suffer from scalability issues. The main objective of this thesis is to apply and specialize such adaptive optimization techniques to typical machine learning models such as SVMs or DNNs. A secondary objective could be to create a reinforcement learning agent that learns how to select the best relative update frequencies.

  • Loshchilov, I., Schoenauer, M., & Sebag, M. (2011). Adaptive Coordinate Descent. In Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation (pp. 885–892). https://doi.org/10.1145/2001576.2001697
  • Glasmachers, T., & Dogan, U. (2013). Accelerated Coordinate Descent with Adaptive Coordinate Frequencies. In Asian Conference on Machine Learning (pp. 72–86). http://proceedings.mlr.press/v29/Glasmachers13.html
  • Perekrestenko, D., Cevher, V., & Jaggi, M. (2017). Faster Coordinate Descent via Adaptive Importance Sampling. In Artificial Intelligence and Statistics (pp. 869–877). http://proceedings.mlr.press/v54/perekrestenko17a.html
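A minimal sketch of the adaptive-frequency idea from [2,3] on a toy separable quadratic; the mixture of uniform and curvature-proportional sampling used below is a common stabilisation chosen here for illustration, not the exact scheme of either paper:

```python
import random

def adaptive_cd(lipschitz, x, steps=500, seed=0):
    """Coordinate descent on f(x) = 0.5 * sum(L_i * x_i**2), where the
    coordinate-wise curvatures L_i differ by orders of magnitude.
    Coordinates are drawn from a mixture of the uniform distribution and
    importance weights proportional to L_i, so sensitive coordinates are
    updated more often while every coordinate is still visited."""
    rng = random.Random(seed)
    n, total = len(lipschitz), sum(lipschitz)
    probs = [0.5 / n + 0.5 * L / total for L in lipschitz]
    for _ in range(steps):
        i = rng.choices(range(n), weights=probs)[0]
        grad_i = lipschitz[i] * x[i]
        x[i] -= grad_i / lipschitz[i]   # step size 1/L_i: exact for a quadratic
    return x

def loss(lipschitz, x):
    return 0.5 * sum(L * xi * xi for L, xi in zip(lipschitz, x))

lips = [100.0, 1.0, 0.01]               # badly conditioned toy problem
x_final = adaptive_cd(lips, [1.0, 1.0, 1.0])
```

On a non-separable loss the per-coordinate curvatures would have to be estimated online, which is exactly where the adaptive schemes in [2,3] come in.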

Game-theoretic approaches are becoming more and more popular in machine learning. In an n-player game, each agent (usually parametrized by a neural network) tries to minimize its own objective function. The objective functions of a pair of agents can be opposing (the competitive case, e.g. agent A tries to minimize f while agent B tries to minimize -f), aligned (the cooperative case), or something in between. The goal of the experimenter is to find a Nash equilibrium, i.e. a stable point in the sense that no agent can gain from changing its strategy. A prime example of a two-player game is Generative Adversarial Nets (GANs) [3]. Here the two agents are the discriminator network, which tries to distinguish between real data and data generated by the second agent, the generator network, whose goal is to fool the discriminator. One of the biggest challenges in training GANs is the instability of the training procedure. It can be shown that the standard procedure for training GANs, Gradient Descent Ascent (simply applying alternating gradient updates to the generator and discriminator), does not necessarily converge. Recently, new ideas [1,2] have emerged to compute Nash equilibria in a more stable manner, for example Competitive Gradient Descent [1]. The goal of this Master's thesis is to understand and implement Competitive Gradient Descent [1], apply it to a novel setting, and investigate possible improvements.

  • Schäfer, Florian, and Anima Anandkumar. “Competitive Gradient Descent.” ArXiv:1905.12103 [Cs, Math], May 28, 2019. http://arxiv.org/abs/1905.12103
  • Balduzzi, David, Sebastien Racaniere, James Martens, Jakob Foerster, Karl Tuyls, and Thore Graepel. “The Mechanics of N-Player Differentiable Games.” In International Conference on Machine Learning, 354–63, 2018. http://proceedings.mlr.press/v80/balduzzi18a.html
  • Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. “Generative Adversarial Nets.” In Advances in Neural Information Processing Systems 2014. http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf .
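The instability of Gradient Descent Ascent, and the fix proposed by Competitive Gradient Descent [1], can be seen on the simplest bilinear game f(x, y) = x·y. The closed-form CGD update below is the 2x2 linear solve of [1] specialised by hand to this scalar game, so treat it as a sketch:

```python
def gda_step(x, y, eta):
    """Gradient Descent Ascent on f(x, y) = x*y: x minimises f, y maximises f,
    with simultaneous gradient updates."""
    return x - eta * y, y + eta * x

def cgd_step(x, y, eta):
    """Competitive Gradient Descent on the same bilinear game: each player
    anticipates the other's move; for f(x, y) = x*y the 2x2 linear solve of
    CGD reduces to these closed-form updates."""
    denom = 1.0 + eta * eta
    return x - eta * (y + eta * x) / denom, y + eta * (x - eta * y) / denom

def run(step, x=1.0, y=1.0, eta=0.2, iters=300):
    for _ in range(iters):
        x, y = step(x, y, eta)
    return (x * x + y * y) ** 0.5   # distance from the Nash equilibrium (0, 0)
```

On this game GDA spirals outward (the distance from the equilibrium grows by a factor of sqrt(1 + eta^2) per step), while CGD contracts by exactly the same factor.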
  • The inner objective: given fixed hyperparameters h, find the optimal model weights w*(h), i.e. those which minimize the loss on the training data
  • The outer objective: find the hyperparameters h* for which the model with weights w*(h*) has minimal loss on the validation data
  • Fu, Jie, Hongyin Luo, Jiashi Feng, Kian Hsiang Low, and Tat-Seng Chua “DrMAD: Distilling Reverse-Mode Automatic Differentiation for Optimizing Hyperparameters of Deep Neural Networks.” ArXiv:1601.00917 [Cs], January 5, 2016. http://arxiv.org/abs/1601.00917
  • Pedregosa, Fabian. “Hyperparameter Optimization with Approximate Gradient.” In International Conference on Machine Learning, 737–46, 2016. http://proceedings.mlr.press/v48/pedregosa16.html
  • Franceschi, Luca, Michele Donini, Paolo Frasconi, and Massimiliano Pontil. “Forward and Reverse Gradient-Based Hyperparameter Optimization.” ArXiv:1703.01785 [Stat], March 6, 2017. http://arxiv.org/abs/1703.01785
  • Franceschi, Luca, Paolo Frasconi, Saverio Salzo, Riccardo Grazzi, and Massimiliano Pontil. “Bilevel Programming for Hyperparameter Optimization and Meta-Learning.” In International Conference on Machine Learning, 1568–77, 2018. http://proceedings.mlr.press/v80/franceschi18a.html

Recent advances in gradient-based hyperparameter optimization allow the efficient optimization of large numbers of hyperparameters. Yet many questions remain unexplored: How useful is it to optimize many hyperparameters? How can overfitting on the validation data be avoided? What are interesting new model architectures to experiment with? In this thesis, you will perform exploratory work with gradient-based hyperparameter optimization. A strong background in linear algebra and differential calculus is required.

  • Lorraine, Jonathan, Paul Vicol, and David Duvenaud. "Optimizing Millions of Hyperparameters by Implicit Differentiation." arXiv preprint arXiv:1911.02590 (2019)
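A minimal sketch of the bilevel setup above, using 1-D ridge regression so that the inner problem w*(h) has a closed form. The hypergradient is estimated by finite differences rather than the implicit differentiation of Lorraine et al., and the data below is made up for illustration:

```python
def ridge_w(h, xs, ys):
    """Inner objective: closed-form solution of
    w*(h) = argmin_w sum((w*x - y)**2) + h * w**2  for a 1-D linear model."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + h)

def val_loss(h, train, val):
    """Outer objective: validation loss of the model trained with hyperparameter h."""
    w = ridge_w(h, *train)
    xs, ys = val
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys))

def tune(train, val, h=5.0, lr=0.5, steps=200, eps=1e-4):
    """Gradient descent on the outer objective, with the hypergradient
    estimated by central finite differences."""
    for _ in range(steps):
        g = (val_loss(h + eps, train, val) - val_loss(h - eps, train, val)) / (2 * eps)
        h = max(h - lr * g, 0.0)   # keep the regularisation strength non-negative
    return h

train = ([1.0, 2.0, 3.0], [2.5, 3.5, 6.5])   # noisy observations of y = 2x
val = ([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])     # clean validation data
h_star = tune(train, val)
```

With millions of hyperparameters, finite differences are hopeless; the cited papers replace them with reverse-mode differentiation through (or implicit differentiation of) the inner optimization.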

Traditional recommender systems assume that the user profile and item attributes are static. This assumption ignores the temporal dynamics that affect the user profile and the items; for example, user interest in an item may change with the season of the year, such as Christmas movies being watched in winter but rarely in summer. This thesis will identify potential approaches to building a dynamic recommender system that can predict users' interest in items based on their historical interactions and behavior.

  • 1) Chao-Yuan Wu, Amr Ahmed, Alex Beutel, Alexander J. Smola, and How Jing. 2017. Recurrent Recommender Networks. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining (WSDM '17). ACM, New York, NY, USA, 495-503. DOI: https://doi.org/10.1145/3018661.3018689
  • 2) Yongfeng Zhang, Min Zhang, Yi Zhang, Guokun Lai, Yiqun Liu, Honghui Zhang, and Shaoping Ma. 2015. Daily-Aware Personalized Recommendation based on Feature-Level Time Series Analysis. In Proceedings of the 24th International Conference on World Wide Web (WWW '15). International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, Switzerland, 1373-1383. DOI: https://doi.org/10.1145/2736277.2741087
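One very simple way to inject temporal dynamics, shown here as a stand-in for the recurrent models of [1] and with made-up numbers, is to weight a user's past interactions with an item by an exponential time decay when scoring it:

```python
import math

def interest_score(interactions, item, now, half_life=30.0):
    """Score an item for one user by summing the user's past interactions
    with it, each weighted by an exponential time decay, so that recent
    behaviour dominates. `interactions` is a list of
    (item, timestamp_in_days, rating) triples."""
    decay = math.log(2) / half_life   # a rating's weight halves every `half_life` days
    return sum(r * math.exp(-decay * (now - t))
               for it, t, r in interactions if it == item)

# hypothetical history: a Christmas movie rated highly in winter (day 0)
# and again half a year later (day 180)
history = [("xmas_movie", 0.0, 5.0), ("xmas_movie", 180.0, 5.0)]
```

A learned model would additionally capture recurring seasonal patterns instead of pure decay, which is exactly what the time-series and recurrent approaches in the references aim for.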

The task of classifying multi-relational data spans a wide range of domains, such as document classification in citation networks, classification of emails, and protein labeling in protein-interaction graphs. Current state-of-the-art classification models rely on learning per-entity latent representations by mining the whole structure of the relation graph; however, they still face two major problems. Firstly, it is very challenging to generate expressive latent representations in sparse multi-relational settings with implicit feedback relations, as there is very little information per entity. Secondly, for entities with structured properties, such as titles and abstracts (text) in documents, models have to be modified ad hoc. This thesis will identify potential deep learning approaches to predicting entity labels based on relation and interaction information, such as predicting a document's category in a citation network or predicting user interest in a social network.

  • Cai, Hongyun, Vincent W. Zheng, and Kevin Chang. "A comprehensive survey of graph embedding: problems, techniques and applications." IEEE Transactions on Knowledge and Data Engineering (2018).
  • Kipf, Thomas N., and Max Welling. "Semi-supervised classification with graph convolutional networks." arXiv preprint arXiv:1609.02907 (2016).
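For reference, the propagation rule of a single graph convolutional layer from Kipf & Welling, H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W), can be sketched in a few lines (pure-Python matrices are used here for self-containment; any tensor library would do):

```python
import math

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def gcn_layer(adj, feats, weights):
    """One graph-convolutional layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    n = len(adj)
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    d_inv_sqrt = [1.0 / math.sqrt(sum(row)) for row in a_hat]   # degree normalisation
    a_norm = [[d_inv_sqrt[i] * a_hat[i][j] * d_inv_sqrt[j] for j in range(n)]
              for i in range(n)]
    h = matmul(a_norm, matmul(feats, weights))                  # aggregate, then transform
    return [[max(v, 0.0) for v in row] for row in h]            # ReLU

# toy 3-node path graph with 2-dimensional node features
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
feats = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]
weights = [[1.0, 0.0], [0.0, 1.0]]
out = gcn_layer(adj, feats, weights)
```

Each node's new representation is a weighted average of its own and its neighbours' features, which is why stacking a few such layers lets label information propagate through sparse relation graphs.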

Siamese networks have grown quite popular in machine learning, especially in research on object tracking and text equivalence, since they are able to compare samples and estimate their similarity. The hypothesis is that this architecture can prove a useful tool for comparing similarities between features from different datasets, becoming an asset for performing Model-Agnostic Meta-Learning across tasks with different schemas.

In this thesis you will implement a siamese network to improve the performance of Model-Agnostic Meta-Learning methods that work across tasks with different schemas.

  • 1. Zhang, Zhipeng, and Houwen Peng. "Deeper and wider siamese networks for real-time visual tracking." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019.
  • 2. Mueller, J., Thyagarajan, A. (2016, March). Siamese recurrent architectures for learning sentence similarity. In thirtieth AAAI conference on artificial intelligence.
  • 3. Brinkmeyer, L., Drumond, R. R., Scholz, R., Grabocka, J., Schmidt-Thieme, L. (2019). Chameleon: Learning model initializations across tasks with different schemas.
  • 4. Drumond, Rafael Rego, et al. "HIDRA: Head Initialization across Dynamic targets for Robust Architectures." Proceedings of the 2020 SIAM International Conference on Data Mining. Society for Industrial and Applied Mathematics, 2020.
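The core of a siamese network is a shared encoder applied to both inputs, with similarity derived from the distance between the two embeddings. A minimal sketch, with a linear encoder standing in for the deep branches and made-up weights:

```python
import math

def encode(x, w):
    """Shared linear encoder: both branches of the siamese network apply
    the SAME weights w, which is what makes the architecture siamese."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def similarity(x1, x2, w):
    """Similarity of two samples: 1.0 for identical embeddings,
    decaying towards 0 as the embeddings move apart."""
    e1, e2 = encode(x1, w), encode(x2, w)
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(e1, e2)))
    return math.exp(-dist)

w = [[1.0, 0.0], [0.0, 1.0]]   # illustrative encoder weights; learned in practice
```

In the meta-learning setting envisaged above, the encoder would be trained so that features from tasks with different schemas map into one comparable embedding space.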

The classical vehicle routing problem (VRP) designs optimal delivery routes where each vehicle only travels one route, each vehicle has the same characteristics and there is only one central depot. The goal of the VRP is to find a set of least-cost vehicle routes such that each customer is visited exactly once by one vehicle, each vehicle starts and ends its route at the depot, and the capacity of the vehicles is not exceeded. This classical VRP has been extended in many ways by introducing additional real-life aspects or characteristics, resulting in a large number of variants of the VRP. In the standard periodic vehicle routing problem (PVRP), customers require visits on one or more days within a planning period, and there are a set of feasible visit options for each customer. Customers must be assigned to a feasible visit option and a VRP is solved for each day in the planning period. The typical objective is to minimize the total distance traveled over the planning period. The PVRP arises in a diverse array of applications, from the collection of recyclables, to the routing of home healthcare nurses, to the collection of data in wireless networks. The wide applicability and versatility of the problem, coupled with the problem’s difficulty, has led to continuing interest and research efforts.

In computer science, evolutionary computation is a family of algorithms for global optimization inspired by biological evolution, and the subfield of artificial intelligence and soft computing studying these algorithms. In technical terms, they are a family of population-based trial and error problem solvers with a metaheuristic or stochastic optimization character. In evolutionary computation, an initial set of candidate solutions is generated and iteratively updated. Each new generation is produced by stochastically removing less desired solutions, and introducing small random changes. In biological terminology, a population of solutions is subjected to natural selection (or artificial selection) and mutation. As a result, the population will gradually evolve to increase in fitness, in this case the chosen fitness function of the algorithm. Evolutionary computation techniques can produce highly optimized solutions in a wide range of problem settings, making them popular in computer science.

  • 1. Braekers, Kris, Katrien Ramaekers, and Inneke Van Nieuwenhuyse. “The Vehicle Routing Problem: State of the Art Classification and Review.” Computers and Industrial Engineering 99 (September 1, 2016): 300–313. https://doi.org/10.1016/j.cie.2015.12.007.
  • 2. Campbell, Ann Melissa, and Jill Hardin Wilson. “Forty Years of Periodic Vehicle Routing.” Networks 63, no. 1 (January 2014): 2–15. https://doi.org/10.1002/net.21527.
  • 3. Zhou, Aimin, Bo-Yang Qu, Hui Li, Shi-Zheng Zhao, Ponnuthurai Nagaratnam Suganthan, and Qingfu Zhang. “Multiobjective Evolutionary Algorithms: A Survey of the State of the Art.” Swarm and Evolutionary Computation 1, no. 1 (March 1, 2011): 32–49. https://doi.org/10.1016/j.swevo.2011.03.001.
  • 4. Ombuki, Beatrice, Brian J. Ross, and Franklin Hanshar. “Multi-Objective Genetic Algorithms for Vehicle Routing Problem with Time Windows.” Applied Intelligence 24, no. 1 (February 1, 2006): 17–30. https://doi.org/10.1007/s10489-006-6926-z.
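The generate/select/mutate loop described above can be sketched as a minimal elitist evolutionary algorithm. The OneMax fitness (the number of ones in a bitstring) is a standard toy stand-in for a real objective such as the cost of a VRP routing plan:

```python
import random

def evolve(fitness, length=20, pop_size=20, generations=100, p_mut=0.05, seed=1):
    """Minimal elitist evolutionary algorithm over bitstrings: truncation
    selection keeps the best half of the population, each survivor produces
    one bit-flip-mutated child, and the survivors themselves are carried
    over unchanged (elitism), so the best fitness never decreases."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]               # truncation selection
        children = [[1 - g if rng.random() < p_mut else g for g in parent]
                    for parent in survivors]           # bit-flip mutation
        pop = survivors + children
    return max(pop, key=fitness)

onemax = sum   # toy fitness: the number of ones in the bitstring
best = evolve(onemax)
```

For the PVRP, the genome would instead encode visit-option assignments and routes, and crossover operators and feasibility repair (as in [4]) become essential.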

Reinforcement learning (RL) is an area of machine learning concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward. In the operations research and control literature, reinforcement learning is called approximate dynamic programming, or neuro-dynamic programming (NDP). It describes a recent methodology that deals with the approximate solution of large and complex dynamic programming (DP) problems. Accordingly, NDP combines simulation, learning, approximation architectures (e.g., neural networks) and the central ideas of DP to break the curse of dimensionality that is typical of DP. NDP could therefore be a promising approach to solving large-scale and complex VRPs.

  • 3. Sutton, Richard S., and Andrew G. Barto. "Reinforcement Learning: An Introduction." https://web.stanford.edu/class/psych209/Readings/SuttonBartoIPRLBook2ndEd.pdf
  • 4. Secomandi, Nicola. “Comparing Neuro-Dynamic Programming Algorithms for the Vehicle Routing Problem with Stochastic Demands.” Computers and Operations Research 27, no. 11 (September 1, 2000): 1201–25. https://doi.org/10.1016/S0305-0548(99)00146-X.
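For intuition on the dynamic-programming core that NDP approximates, here is exact tabular Bellman iteration on a toy 5-state chain; a realistic VRP state space is far too large to enumerate like this, which is precisely why NDP replaces the table with a learned approximator:

```python
GOAL, GAMMA = 4, 0.9

def step(state, action):
    """Deterministic 5-state chain: action 0 moves left (floored at 0),
    action 1 moves right; entering state 4 yields reward 1."""
    nxt = max(state - 1, 0) if action == 0 else state + 1
    return nxt, (1.0 if nxt == GOAL else 0.0)

def q_iteration(sweeps=50):
    """Synchronous Bellman optimality updates on a Q-table:
    Q(s, a) <- r(s, a) + gamma * max_a' Q(s', a')."""
    q = [[0.0, 0.0] for _ in range(GOAL + 1)]          # q[state][action]
    for _ in range(sweeps):
        new_q = [[0.0, 0.0] for _ in range(GOAL + 1)]
        for s in range(GOAL):                          # the goal state is absorbing
            for a in (0, 1):
                nxt, r = step(s, a)
                new_q[s][a] = r + GAMMA * max(q[nxt])
        q = new_q
    return q

q = q_iteration()
```

The converged table recovers the expected discounted values (e.g. Q(0, right) = 0.9³) and the greedy policy "always move right".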

In general reinforcement learning, the action space is pre-defined (the number of actions is fixed throughout the task). But in real-world situations, we may need to use new actions to accomplish a task. This can also be observed in video games, where the number of actions may increase as the level of the game increases.

As an example, consider the well-known game Dave. In level one, there are the actions {up}, {left}, {right}. Once we enter level three, we get an additional action, {shooting}. If we do not know that we may get a new action in the future, we may need to train the reinforcement learning algorithm from scratch with the additional action. This takes a lot of time, and the knowledge gained in the previous levels goes to waste.

In this thesis, you will develop a novel learning technique that can adapt to new actions incrementally rather than starting from scratch. The algorithm shall not lose its existing knowledge (e.g., knowledge of previous levels in video games).

  • 1. Venkatesan, Rajasekar, and Meng Joo Er. "A novel progressive learning technique for multi-class classification." Neurocomputing 207 (2016): 310-321.
  • 2. Mnih, Volodymyr, et al. "Human-level control through deep reinforcement learning." Nature 518.7540 (2015): 529.
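One simple sketch of the incremental idea: when a new action becomes available, grow the Q-table in place instead of retraining. The optimistic "max" initialisation below is an illustrative choice, not a method from the cited papers:

```python
def add_action(q_table, init="max"):
    """Grow every state's action-value list by one entry for a newly
    available action, without touching the values already learned.
    Initialising the new entry to the state's current best value
    (an illustrative choice) makes the agent willing to try the new
    action while keeping the old policy as a fallback."""
    for state, values in q_table.items():
        new_value = max(values) if init == "max" else 0.0
        q_table[state] = values + [new_value]
    return q_table

# hypothetical Q-table learned in level one with two actions
q = add_action({"s0": [0.2, 0.5], "s1": [0.9, 0.1]})
```

With deep Q-networks as in [2], the analogous operation is adding an output unit to the final layer while keeping all other weights frozen.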

In general, while training a classification algorithm, we have true labels (Y) and predicted labels (Y'). The usual approach to updating the parameters of the algorithm is to backpropagate the loss, which is the average of the absolute differences (or the RMS loss). However, recent work shows that considering the top-k loss [1] or the average top-k loss [2] provides better results compared with the traditional method.

In this work, one would analyse the results obtained with the above methods and find a better distribution over the losses for increasing the accuracy of the algorithm.

For information:

Top-k loss: the k-th largest value of |Y - Y'| (and similarly for the RMS loss)

Average top-k loss: the average of the k largest values of |Y - Y'| (and similarly for the RMS loss)
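The two definitions above can be written directly in code:

```python
def topk_loss(y_true, y_pred, k):
    """The k-th largest absolute error |Y - Y'| (the top-k loss of [1])."""
    errors = sorted((abs(t - p) for t, p in zip(y_true, y_pred)), reverse=True)
    return errors[k - 1]

def average_topk_loss(y_true, y_pred, k):
    """The mean of the k largest absolute errors (the average top-k loss of [2])."""
    errors = sorted((abs(t - p) for t, p in zip(y_true, y_pred)), reverse=True)
    return sum(errors[:k]) / k
```

Note that for k = 1 the top-k loss is the maximal loss of [1], and for k = n the average top-k loss reduces to the usual average loss, so k interpolates between the two extremes.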

  • 1. S. Shalev-Shwartz and Y. Wexler. Minimizing the maximal loss: How and why. In ICML, 2016
  • 2. Yanbo Fan, Siwei Lyu, Yiming Ying, Bao-Gang Hu. Learning with Average Top-k Loss. In NeurIPS, 2017
  • 3. Leonard Berrada, Andrew Zisserman, M. Pawan Kumar. Smooth Loss Functions for Deep Top-k Classification. In ICLR, 2018

It is very difficult to automatically extract quantitative information from medical images. Brain tumors in particular can have unusual shapes, so that common medical metrics (e.g. the size of the tumor) no longer work. Their unpredictable appearance and shape make them challenging to segment in multi-modal brain imaging. Delineating brain tumor boundaries from magnetic resonance images is an essential task for the analysis of brain cancer. Several machine learning methods are used to automatically segment brain tumors.

If you are interested in other topics, please ask one of us directly.


Machine Learning Topics for Thesis

Machine learning experts at phdtopic.com will carry out a complete literature survey and identify research gaps by sharing novel topics in your area. More than 8000 scholars have successfully completed their research with us, and by working with us you can see the quality of our work for yourself. Our research methodology, in the context of a machine learning (ML) project, is a model that outlines how the research is to be conducted, including the processes, methods, tools and data analysis techniques that will be utilized. Here, we give a detailed breakdown of the key components that could form part of a machine learning research methodology:

  • Problem Definition:
  • Problem Statement: We clearly define the issue you are trying to solve with machine learning methods.
  • Research Questions: We pose the specific questions the research aims to answer.
  • Hypotheses: We formulate the hypotheses to be tested through the investigation.
  • Literature Review:
  • Existing Solutions: We survey existing techniques and frameworks that address the issue.
  • Theory and Background: We review the theoretical background of the problem domain.
  • Data Collection:
  • Data Sources: We identify where the data will be gathered from.
  • Data Sampling: We describe the sampling scheme used to ensure the data is representative.
  • Data Privacy and Ethics: We ensure compliance with data protection laws and ethical guidelines.
  • Data Preprocessing:
  • Data Cleaning: We define the procedures for handling missing data, noise and outliers.
  • Feature Selection/Extraction: We detail the methods used to reduce dimensionality or extract new features.
  • Data Transformation: We document any transformations applied to the data, such as normalization or encoding.
  • Model Selection:
  • Algorithm Selection: We justify the choice of machine learning algorithms.
  • Baseline Models: We establish baseline models against which more complex techniques are compared.
  • Model Training and Validation:
  • Training Process: We describe how the models are trained, including any cross-validation scheme.
  • Hyperparameter Tuning: We define the approach used to optimize model hyperparameters.
  • Model Evaluation Metrics: We choose metrics appropriate to the problem type (classification, regression, clustering, etc.) to evaluate model performance.
  • Experimental Design:
  • Test Bed: We describe the environment in which the experiments are carried out.
  • Reproducibility: We provide detailed settings and random-seed information so that the experiments can be reproduced.
  • Statistical Tests: We specify any statistical tests used to compare models or assess findings.
  • Results and Discussion:
  • Performance Analysis: We present and discuss the results obtained from the various models.
  • Impact of Findings: We discuss the implications of the findings in the context of the problem domain.
  • Comparison with State-of-the-Art: We compare our findings with current state-of-the-art solutions.
  • Limitations and Future Work:
  • Limitations: We acknowledge the limitations of the study and possible sources of bias.
  • Future Directions: We propose areas for further research or improvement.
  • Conclusions:
  • Summary: We outline the main results and contributions of the research.
  • Takeaways: We provide key takeaways, including practical applications where relevant.
  • Documentation and Sharing:
  • Code and Data Sharing: We describe how the code and datasets (if sharable) will be made available to others.
  • Publications: We outline the plan for publishing or disseminating the research results.
  • Ethics and Social Impact:
  • Ethical Considerations: We address any ethical concerns relevant to the research.
  • Societal Impact: We discuss the potential influence of the ML application on society.

Our research methodology serves as a complete plan that guides the research activities. To ensure the validity and reliability of the findings, it is essential that it is well-structured, systematic and rigorous. It should also be flexible enough to adapt to any unforeseen difficulties that arise during the research process.

Machine Learning Projects for Thesis

We have curated a list of innovative and interesting machine learning topics for thesis work; read it and explore further with our services.

  • TC and PPGL Detection Based on Machine Learning Models
  • ACCLAiM: Advancing the Practicality of MPI Collective Communication Autotuning Using Machine Learning
  • Securing virtual execution environments through machine learning-based intrusion detection
  • Improved Extreme Learning Machine Based on Deep Learning and Its Application in Handwritten Digits Recognition
  • An E-Learning System with Multifacial Emotion Recognition Using Supervised Machine Learning
  • Research and Application of Different Machine Learning Algorithms in ILPD Risk Prediction Model
  • Design of Human-computer Interaction System Using Gesture Recognition Algorithm from the Perspective of Machine Learning
  • Bluetooth Based Indoor Positioning Using Machine Learning Algorithms
  • Feature expansion of single dimensional time series data for machine learning classification
  • Smart equipment failure detection with machine learning applied to thermography inspection data in modern power systems
  • Password Strength Analysis and its Classification by Applying Machine Learning Based Techniques
  • A Machine Learning based Facial Expression and Emotion Recognition for Human Computer Interaction through Fuzzy Logic System
  • Pothole Detection Using Machine Learning Algorithms
  • Machine Learning to optimize Permanent Magnet Synchronous Machines
  • Fast HEVC intra coding algorithm based on machine learning and Laplacian Transparent Composite Model
  • Machine Learning for Optimum CT-Prediction for qPCR
  • Machine Learning Techniques Applied to Sensor Data Correction in Building Technologies
  • Activity Recognition and Localization based on UWB Indoor Positioning System and Machine Learning
  • Proposed machine learning system to predict and estimate impulse noise in OFDM communication system
  • Fast Extraction of Per-Unit-Length Parameters of Hybrid Copper-Graphene Interconnects via Generalized Knowledge Based Machine Learning

Thesis-State

  • Research Team
  • Feb 23, 2020

Top 60 Thesis/Dissertation Topics in Machine Learning and Artificial Intelligence of 2020

Selecting a focus area and topic for conducting your research and writing a thesis or dissertation can be a problematic process, given the constant transformation of the academic landscape. This is why our team has investigated strategies and come up with the best ones you can utilize to select the topic best suited to you and ensure a perfect trajectory toward academic success. Our experts collaborated to define six steps that can set you in the right direction. To read more about the process, see the "Starting Research and Selecting a Topic" section in our knowledge base.

We sent out invitations to the 134 PhDs on board with us to submit the most valuable research topics in CS and IT for the year 2020. Our QA team received more than a thousand topics, which were then thoroughly discussed in expert groups to distill a list for our valued readers. These topics are based mainly on recent trends in awarded grants and national agendas, as well as potential focus areas that are expected to rise exponentially in the next five years. Our team has also prepared a basic introduction and a strategic overview for each topic in the list, which can be provided to you upon request; feel free to contact us in that regard using the contact details provided at the end.

We have divided the list of topics into focus areas to make selection easier for you; however, the topics are interdisciplinary, and in many ways the focus areas overlap. These topics are high priorities at reputable institutes. We strongly recommend using this list as a source of inspiration. Copying the topics as-is is not recommended, although you can change them to add your own flavor.

In addition, we provide some valuable resources with each focus area that may allow you to dig deeper and shape your understanding of the research topics.

Starting with Machine Learning and AI, a series of posts will be published for the best topics in the following focus areas:

Machine Learning and AI

Computer and Network Security

Big Data and IoT

Information Systems (Cloud and Database Management)

Health IT and Bioinformatics

Visual Computing (AR/VR/CGI)

Software Theory (OS and Architecture)

Neural Generative Models and Representation Learning for Information Retrieval

Controversy Identification Using Machine Learning: Time Dependent Probabilistic Modelling of Controversy Formation based on Social Network Analysis

Automated Product Categorization using Multi-class classification on Data from Amazon

Multi Sensor Fusion for Simultaneous Localization and Mapping on Autonomous Vehicles

Identification of Fake Reviews using Network Analysis and Modeling for E-commerce websites

Approaches for Modeling Data in Multiple Modalities using representation-learning

Predictive, inferential, and mechanistic modeling of cellular-decision making

Reinforcement Learning for enhancing dependability of large distributed control systems: An approach based on advanced simulation structures

Dynamic Scheduling using predictive analytics of Multi Cloud Environments

Rule-based reasoning for knowledge authoring and categorization

Testing deep learning models for Biomedical Imaging: An intelligent image regeneration system

Analysis of the impact of Artificial Intelligence on Distributed Energy Technology using time series analysis

Using deep learning on visual data to predict subjective attributes

An analysis of Hierarchical image classification in CNNs

Using Machine Learning for predicting AQI values based on Satellite Images

Analysis of Landscape images for climate classification: A neural network based approach

Distracted Driver identification: An analysis of most appropriate feature classification and ML algorithms

Predicting Currency exchange rate for recognizing social arbitrage based on News Media

Using Machine Learning models for Credit Card Fraud Detection

Analysis of Economic Networks to Identify Industries: Using Network Characteristics for Node Labeling

Predicting Chaotic systems: An analysis for current Machine Learning Techniques

Using Machine Learning to Model Student Learning in Mobile Apps

Analysis of football match data to predict goals: ANN based approach

Framework for automating feature engineering for deep Q-learning on Markov decision processes: Using NLP for MDP Embeddings

Machine Learning model for risk of Breast Cancer Relapse based on Copy Number

Using DNA Microarray Data for identification of Leukemia Patients: A new classification approach

A comparison study of multinomial classification methods, SVM, Naive Bayes, Logistic Regression and Random Forests, to predict drug-drug interaction severity values from the adverse drug reactions in the FDA’s database

A framework for gradient boosting model predicting CVD risks using multiple EHRs

Social Media Trolls identification using ML: Naive Bayes, Logistic Regression, Kernel SVM, Random Forest, and LSTM neural networks to identify political trolls across social media

A classification framework for Climate Change stance: Using labeled and unlabeled data from Twitter

Collision Avoidance for Urban Air Mobility Vehicles using Markov Decision Processes

Machine Learning on Biochemical Small Datasets: Strategies for Pursuing Predictive Analyses of Human Voltage Gated Sodium Ion Channel (hNaVs) Inhibitors

Optimization model for Antibiotic Treatment using Microculture Results dataset

How accurate is weather data for predicting solar power generation? A new feature engineering approach using the National Solar Radiation Database (NSRDB)

Testing Random Network Distillation Theory & Reinforcement Learning for Transfer Learning

Learning With High-Level Attributes: An experiment with fine-grained classification on the Caltech Birds Dataset

Cardiovascular Health prediction using Adaptive Network-Based Fuzzy Inference System (ANFIS)

Biomedical Image Analysis and Reconstruction using Convolutional Neural Networks (CNN)

Using prediction algorithm on acceleration and gyroscopic data of digital pen for character classification: A framework for handwriting identification

Predictive analysis on work visa approval data from the US state department

Transfer Learning to fine-grained visual categorization (FGVC) for Tree Leaf Identification

Labeling Characters as Good or Evil using Sentiment Analysis approach in Cloud Enabled Machine Learning

Prediction of weight-loss based on calorie intake using MYFITNESSPAL DATASET

Price prediction model for the AirBnB offerings based on location

Long-Short Term Memory (LSTM) and Convolutional Neural Network (CNN) models on exchange traded fund close price data to predict future prices.

Machine Learning Model for Tennis Match prediction using prior outcomes and player characteristics

Deep Learning to Collaborative Filtering Model: A novel approach for predictor system

Supervised Learning on Cloud Scale Networks for predicting Link Failure and Localization

An experiment using Deep Neural Networks for tuning of an Aircraft Pitch PID controller

A framework for detecting fake reviews using Yelp Data

SVM classifier and a modified convolutional neural network (CNN) based on Google Inception V3 to diagnose skin images as benign or malignant

Predictive analysis on used car prices

A framework for Yelp Recommendation System using XGboost

A critical review of reinforcement learning algorithms: Defining the way forward

Learning Generative Models using Transformations

New Advances in Sparse Learning, Deep Networks, and Adversarial Learning: Theory and Applications

Prediction system for Diagnosing Schizophrenia: A framework for clinical decision support

Biomedical Entity Recognition

A review into Energy Demand Forecast systems: A novel framework using cloud based AI for real time prediction

Text Classification: A review and way forward

For a basic understanding of machine learning, take this course for free at Coursera. The course is comprehensive, and one of the best MOOCs to date on any subject.

"Machine Learning" offered by Stanford

In order to get some expert insights into each component of machine learning, alongside some practical approaches, take the "Deep Learning Specialization" offered by Deeplearning.ai.

In order to start with practical implementation from the get-go, Google's offering of "Machine Learning with TensorFlow on GCP" is the best way to go. It provides hands-on, step-by-step guides on implementing machine learning models without any cost or hassle.

Along the same lines as the GCP specialization, a much easier and quicker way to start is Microsoft Azure ML Studio, which provides you with already-constructed models and algorithms to play with and implement. It's fun, it's easy and it's highly valuable: "Implementing Predictive Analytics" and "Predictive Analytics for IoT".




Thesis Topics on Machine Learning

A machine learning thesis concept should relate to your interests or passion, the knowledge of your domain experts, and the resources available to you. If you feel there is a lack of good thesis topics in machine learning, contact us. We constantly track new and trending technologies and, by consulting reputed journals, come up with new thesis ideas. Our enthusiastic professionals continually propose machine learning topics across all fields of ML.

Various possible machine-learning-based thesis concepts are discussed below:

  • Deep Learning Advancements:
  • Investigate transformer architectures for natural language processing.
  • Examine the capabilities of capsule networks.
  • Develop new regularization methods or activation functions.
  • Explainability & Interpretability:
  • Introduce mechanisms to visualize the decisions of deep neural networks.
  • Develop more interpretable ensemble models such as random forests.
  • Fairness, Accountability, and Transparency:
  • Estimate and reduce bias in machine learning techniques.
  • Investigate the societal implications of automated decision-making.
  • Build techniques to check fairness against specific definitions.
  • Adversarial Machine Learning:
  • Construct models that are robust against adversarial attacks.
  • Develop new adversarial attack approaches.
  • Interpret the underlying reasons for model vulnerability.
  • Neurosymbolic Reasoning:
  • Integrate deep learning with symbolic reasoning.
  • Embed logic and structured knowledge in neural models.
  • Graph Neural Networks:
  • Investigate the scalability of graph neural networks.
  • Apply GNNs in fields such as molecular biology and social network analysis.
  • Study learning on dynamic and evolving graphs.
  • Domain Adaptation & Transfer Learning:
  • Adapt existing models to fresh, novel domains.
  • Develop strategies for robust knowledge sharing between tasks.
  • Healthcare:
  • Forecast disease outbreaks with machine learning.
  • Offer personalized treatment recommendations.
  • Analyze medical imagery using convolutional networks.
  • Optimization:
  • Investigate non-convex optimization landscapes.
  • Conduct distributed and parallel training of deep networks.
  • Study stochastic optimization methods.
  • Anomaly Identification:
  • Detect anomalies in high-dimensional data.
  • Apply anomaly detection to fraud detection, industrial fault detection, and cybersecurity.
  • Few-Shot & Zero-Shot Learning:
  • Train models from small amounts of data.
  • Use cross-modal learning methods.
  • Transfer knowledge between unrelated tasks.
  • Multimodal Learning:
  • Combine data from different sources such as images, audio, and text.
  • Use joint embeddings for heterogeneous data.
  • Resource-Efficient ML:
  • Study methods for on-device machine learning (for instance, on mobile devices).
  • Apply model compression, quantization, and pruning.
  • Conduct energy-efficient training and inference.
  • Time-Series Prediction:
  • Apply machine learning to financial market forecasting.
  • Employ deep learning for clinical time-series data.
  • Carry out sequence modeling with attention mechanisms.
  • Reinforcement Learning:
  • Apply deep reinforcement learning in complex environments.
  • Study multi-agent reinforcement learning dynamics.
  • Investigate planning and intrinsic motivation in RL.
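To make one of the topics above concrete, the anomaly identification idea can be illustrated with a minimal sketch: flagging values in a one-dimensional series whose z-score exceeds a threshold. This is a hypothetical toy example (the function name and threshold are our own, not taken from any listed thesis); real high-dimensional detectors would use methods such as isolation forests, autoencoders, or density estimation.

```python
# Toy anomaly identification: flag values whose z-score exceeds a threshold.
# Illustrative sketch only, using just the Python standard library.
from statistics import mean, stdev

def zscore_anomalies(values, threshold=2.0):
    """Return indices of values lying more than `threshold` standard
    deviations from the sample mean."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:  # constant series: nothing can be anomalous
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

readings = [10.1, 9.8, 10.0, 10.2, 9.9, 42.0, 10.05, 9.95]
print(zscore_anomalies(readings))  # → [5]: the 42.0 reading stands out
```

Note the modest default threshold: with small samples, a single large outlier inflates the standard deviation itself, so the maximum attainable z-score for n points is (n-1)/√n, and a threshold near 3 may never trigger.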

We conclude that, when choosing a research concept, it is very helpful to understand the current state of the field, identify its limitations, and decide how your project can improve on existing work.

Our aim is to bring novel thesis and dissertation ideas and topics to scholars. We cover newsworthy issues that have not yet been addressed in the research area. Spark up your areas of interest with our thesis writing services, and we will make the whole writing process efficient.

Thesis Ideas on Machine Learning

PhD Research thesis Ideas in Machine Learning

Our titles draw public attention, so get your PhD research thesis ideas in machine learning done by professionals who have more than 18 years of expertise in this field. We have an extensive team of proficient thesis-writing experts; moreover, you can track our work, and in case any editing has to be done, we rectify it immediately. Our expert writers complete machine learning term papers as per university guidelines to gain the highest ranking.

Multiple ideas are suggested by our researchers on trending ML ideas that you may find interesting.

  • Ensemble machine learning model for classification of handwritten digit recognition
  • Machine Learning and Deep Learning Techniques for Residential Load Forecasting: A Comparative Analysis
  • Development of Machine Learning-based Predictive Models for Air Quality Monitoring and Characterization
  • Machine Learning Based Classification of Ducted and Non-Ducted Propeller Type Quadcopter
  • Multi-Class Crevasse Detection Using Ground Penetrating Radar and Feature-Based Machine Learning
  • Multi-Class Electrogastrogram (EGG) Signal Classification Using Machine Learning Algorithms
  • Performance Analysis of Machine Learning-based Face Detection Algorithms in Face Image Transmission over AWGN and Fading Channels
  • Machine learning algorithms applied in automatic classification of social network users
  • Supervised Machine Learning Algorithms to Detect Instagram Fake Accounts
  • Machine Learning-Enabled Classification of Forearm sEMG Signals to Control Robotic Hands Prostheses
  • Employee Classification for Personalized Professional Training Using Machine Learning Techniques and SMOTE
  • Machine Learning based Prediction of Wire Bonding Profile in 3D stacked integrated microelectronic packaging
  • Quora Based Insincere Content Classification & Detection for Social Media using Machine Learning
  • Vertical Autoscaling of GPU Resources for Machine Learning in the Cloud
  • Active Machine Learning in Regression Problems
  • Efficiency Comparison of Machine Learning Algorithms for EEG Interpretation
  • Epileptic Seizure Prediction Using Machine Learning Techniques on Real-Time EEG Signals
  • A Machine Learning IDS for Known and Unknown Anomalies
  • Using Supervised Machine Learning to Automatically Build Relevance Judgments for a Test Collection
  • Short-term Wind Speed Forecasting using Machine Learning Algorithms

Why Work With Us ?

Senior research members, research experience, journal membership, book publishing, research ethics, business ethics, valid references, explanations, and paper publication: nine big reasons to select us.

Our Editor-in-Chief, who owns the website, controls and delivers all aspects of PhD Direction to scholars and students and oversees the full management of all our clients.

Our world-class certified experts have 18+ years of experience in research and development programs (industrial research) and have helped as many scholars as possible develop strong PhD research projects.

We are associated with 200+ reputed SCI- and SCOPUS-indexed journals (SJR ranking) to get research work published in standard journals (your first-choice journal).

PhDdirection.com is the world's largest book publishing platform, working predominantly in subject-wise categories to assist scholars and students with book writing and placement in university libraries.

Our researchers uphold the required research ethics: confidentiality and privacy, novelty (valuable research), plagiarism-free work, and timely delivery. Our customers are free to examine their ongoing research activities at any time.

Our organization prioritizes customer satisfaction, online and offline support, and professional delivery, since these are the real inspiring business factors.

Solid work is delivered by a young, qualified, global research team. References are the key to easier evaluation of the work, because we carefully assess scholars' findings.

Detailed videos, readme files, and screenshots are provided for all research projects. We provide TeamViewer support and other online channels for project explanation.

Worthy journal publication is our main focus, in venues such as IEEE, ACM, Springer, IET, and Elsevier. We substantially reduce the scholar's burden on the publication side and carry scholars from initial submission to final acceptance.

