DS-2018-08: Quantum Algorithms and Learning Theory

DS-2018-08: Arunachalam, Srinivasan (2018) Quantum Algorithms and Learning Theory. Doctoral thesis, University of Amsterdam.

Abstract

In this thesis, we present results in two directions of research. In the first part we study query and gate complexity of quantum algorithms for certain problems and in the second part we study sample and query complexity of quantum machine learning algorithms.

Part I: Quantum algorithms

In the first part we present three contributions to quantum algorithms, which we briefly summarize below.

Chapter 3. We look at the following basic search problem: suppose there is an unstructured database consisting of N elements, and one of the elements is "marked". Our goal is to find the marked element. To solve this problem we are allowed to make "queries", which tell us whether a given element is marked or not. Ideally we would like to find the marked element using as few queries as possible. Classically, in the worst case one would need to make essentially N queries to find the marked element.
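
To make the query model concrete, the following toy sketch (the oracle, database size, and marked index below are illustrative stand-ins, not taken from the thesis) shows a classical search that asks the oracle about one index per query and, in the worst case, ends up querying essentially every index.

    def classical_search(is_marked, N):
        """Classical search: query indices one by one; worst case N queries."""
        for i in range(N):
            if is_marked(i):      # one query to the oracle
                return i
        return None

    # Worst case: the marked element is the last index to be tried.
    N = 1000
    marked = N - 1
    print(classical_search(lambda i: i == marked, N))  # finds index 999 after N queries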

In 1996, Grover constructed a quantum algorithm that solves this problem using O(sqrt{N}) quantum queries and O(sqrt{N} log N) other elementary gates. It is known that Omega(sqrt{N}) quantum queries are necessary to solve the search problem, so Grover's algorithm cannot be improved in terms of queries. In this chapter we describe a new quantum algorithm for the search problem whose gate complexity is essentially O(sqrt{N}), while preserving the query complexity of Grover's algorithm.
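
As an illustration of the sqrt{N} scaling, here is a toy classical simulation of the amplitude dynamics behind Grover's algorithm (a minimal numpy sketch, not the gate-level construction described in this chapter; the database size and marked index are arbitrary choices).

    import numpy as np

    N = 1 << 10    # toy database size (arbitrary choice)
    marked = 123   # index of the marked element (arbitrary choice)

    # Start from the uniform superposition over all N indices.
    amp = np.full(N, 1 / np.sqrt(N))

    # Roughly (pi/4) * sqrt(N) Grover iterations suffice.
    iterations = int(np.pi / 4 * np.sqrt(N))
    for _ in range(iterations):
        amp[marked] *= -1           # oracle: flip the sign of the marked amplitude
        amp = 2 * amp.mean() - amp  # diffusion: reflect all amplitudes about their mean

    # About 25 iterations; the marked element is then measured with probability close to 1.
    print(iterations, amp[marked] ** 2)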

Chapter 4. The flip-side of obtaining new quantum algorithms is showing query lower bounds, i.e., showing that every quantum algorithm needs to make at least a certain number of queries in order to solve a problem. In this direction there are two famous techniques for proving query lower bounds, the polynomial method and the adversary method. The adversary method is known to characterize quantum query complexity: up to constant factors, the adversary bound equals the quantum query complexity, so it can also be used to obtain upper bounds on quantum query complexity.

A natural question is whether the polynomial method admits such a converse as well. In this chapter we give a positive answer to this question by introducing a new degree measure for Boolean functions, the completely bounded approximate degree (denoted cbdeg). We show that for every Boolean function f, cbdeg(f) equals the quantum query complexity of f. Our succinct characterization of quantum algorithms in terms of polynomials not only refines the polynomial method, but also gives a new technique for proving upper and lower bounds on quantum query complexity.
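
Informally, and with notation assumed here rather than quoted from the chapter, the connection can be summarized as follows: the polynomial method rests on the fact that the acceptance probability of a T-query quantum algorithm is a polynomial of degree at most 2T in the input bits, and the characterization above supplies a converse for the completely bounded variant of the approximate degree.

    % Polynomial method: for a T-query quantum algorithm A on inputs x in {0,1}^n,
    %   p_A(x) = Pr[A accepts x]  is a polynomial of degree at most 2T,
    % hence the approximate degree lower-bounds query complexity:
    \widetilde{\deg}_{\epsilon}(f) \le 2\, Q_{\epsilon}(f)
    % Converse shown in this chapter (stated informally):
    Q_{\epsilon}(f) = \Theta\big(\mathrm{cbdeg}_{\epsilon}(f)\big)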

Chapter 5. Optimization is an important task that touches virtually every area of science. For f: R^d -> R, consider the following optimization problem: OPT = min {f(x) : x in R^d}. One generic technique often used to compute OPT is the gradient-descent algorithm. It begins with an arbitrary x in R^d, computes the gradient of f at x (denoted nabla f(x)), and moves to a point x' in the direction of -nabla f(x). This process is repeated a number of times until the algorithm hopefully reaches a good approximation of OPT. Given the simplicity and generality of this approach, gradient-based methods are ubiquitous in machine learning algorithms.
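
A minimal sketch of plain gradient descent (the step size, iteration count, and quadratic test function below are arbitrary illustrative choices, not taken from the chapter):

    import numpy as np

    def gradient_descent(grad, x0, eta=0.1, steps=100):
        """Repeatedly step against the gradient: x <- x - eta * grad(x)."""
        x = np.asarray(x0, dtype=float)
        for _ in range(steps):
            x = x - eta * grad(x)
        return x

    # Toy example: f(x) = ||x - c||^2 has gradient 2(x - c) and minimizer c.
    c = np.array([1.0, -2.0, 3.0])
    x_min = gradient_descent(lambda x: 2 * (x - c), x0=np.zeros(3))
    print(x_min)  # close to c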

An integral part of gradient-based algorithms is the gradient computation step. Can this step be improved using quantum techniques? In this chapter we develop a quantum algorithm that computes the gradient of f: R^d -> R quadratically faster than classical methods. To be precise, we show that in order to obtain an eps-coordinate-wise approximation of the d-dimensional gradient vector nabla f at a given point x, it suffices to make O(sqrt{d}/eps) queries to an oracle encoding f. Combining our quantum gradient computation algorithm with other quantum subroutines, we obtain a quadratic quantum improvement in the complexity of almost all gradient-based optimization algorithms.
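
For contrast with the O(sqrt{d}/eps) quantum query count, the standard classical baseline approximates each coordinate of nabla f by a finite difference, which already costs on the order of d evaluations of f per gradient. A minimal sketch under these assumptions (the test function is illustrative):

    import numpy as np

    def finite_difference_gradient(f, x, h=1e-5):
        """Central differences: 2 queries to f per coordinate, so 2d queries in total."""
        x = np.asarray(x, dtype=float)
        g = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (f(x + e) - f(x - e)) / (2 * h)
        return g

    # Example: f(x) = sum of x_i^2 has gradient 2x.
    print(finite_difference_gradient(lambda x: np.sum(x ** 2), np.array([1.0, 2.0, 3.0])))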

Part II: Quantum learning theory

In the second part we present two contributions to quantum learning theory, which we briefly summarize below.

Chapter 6. In the last decade, with the explosion of data and information, machine learning has gained prominence. Alongside the boom in classical machine learning, the last few years have seen growing interest in quantum machine learning, an interdisciplinary area that uses the power of quantum physics to improve classical machine learning algorithms.

In this chapter, we survey the theoretical side of quantum machine learning: quantum learning theory. We describe the main results known for three models of learning, using classical as well as quantum data: exact learning from membership queries, the probably approximately correct (PAC) learning model and the agnostic learning model. Apart from information-theoretic results, we also survey results on the time complexity of learning from membership queries and learning in the PAC and agnostic models.

Chapter 7. Leslie Valiant's PAC model of learning gives a complexity-theoretic definition of what it means for a concept class C (i.e., a collection of Boolean functions) to be (efficiently) learnable. In the PAC model, our goal is to approximately learn an unknown Boolean function from C given random labeled examples of that function. It is well known that the number of random examples necessary and sufficient to PAC-learn C is given by a combinatorial parameter, the VC dimension of C.
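
For reference, the classically known tight bound reads as follows, where d denotes the VC dimension of C, eps the approximation error, and delta the failure probability (a standard statement of the bound, not quoted verbatim from this chapter):

    % Number of classical random examples necessary and sufficient to PAC-learn C:
    \Theta\!\left( \frac{d}{\epsilon} + \frac{\log(1/\delta)}{\epsilon} \right)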

In this chapter we ask whether a quantum learner can PAC-learn C from fewer quantum examples. We give a negative answer by showing that the number of quantum examples necessary and sufficient to PAC-learn C is also given by the VC dimension of C. We also consider more realistic and flexible variants of PAC learning, namely agnostic learning and learning under random classification noise. In both of these learning models, we show that quantum examples are not more powerful than classical examples.

Item Type: Thesis (Doctoral)
Report Nr: DS-2018-08
Series Name: ILLC Dissertation (DS) Series
Year: 2018
Subjects: Computation
URI: https://eprints.illc.uva.nl/id/eprint/2157
