Posted 20 hours ago

Mining of Massive Datasets

£9.99 Clearance
Shared by
ZTS2023

About this deal

CS341: Project in Mining Massive Data Sets is an advanced project-based course. Students work on data mining and machine learning algorithms for analyzing very large amounts of data; both interesting big datasets and computational infrastructure (a large MapReduce cluster) are provided by the course staff. Our goal in this chapter is to offer methods for discovering clusters in data. We are particularly interested in situations where the data is very large, and/or where the space is high-dimensional or not Euclidean at all. We shall therefore discuss several algorithms that assume the data does not fit in main memory. However, we begin with the basics: the two general approaches to clustering and the methods for dealing with clusters in a non-Euclidean space. Recommended prerequisites are knowledge of basic computer science principles and skills, at a level sufficient to write a reasonably non-trivial computer program (e.g., CS107 or CS145 or equivalent), and familiarity with basic linear algebra (e.g., any of Math 51, Math 103, Math 113, CS 205, or EE 263 would be much more than necessary). Together with each chapter there is also a set of lecture slides that we use for teaching Stanford CS246: Mining Massive Datasets.

CS246: Mining Massive Datasets is a graduate-level course that discusses data mining and machine learning algorithms for analyzing very large amounts of data. The emphasis is on MapReduce as a tool for creating parallel algorithms that can process very large amounts of data. To support deeper explorations, most of the chapters are supplemented with further-reading references. This introduction is followed by the book's main topics, starting with a chapter on techniques for assessing the similarity of data items in large datasets. This covers the similarity and distance measures used in conventional applications, but with special emphasis on the techniques needed to render these measures applicable to large-scale data processing. This approach is nicely illustrated by the use of min-hash functions to approximate Jaccard similarity. The next chapter focuses on mining data streams, including sampling, Bloom filters, counting, and moment estimation.
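To make the min-hash idea concrete, here is a small self-contained sketch (not taken from the book; the number of hash functions and the example sets are arbitrary choices for illustration). The key property is that two sets agree in any one min-hash position with probability equal to their Jaccard similarity, so the fraction of agreeing signature positions estimates it:

```python
import hashlib
import random

def jaccard(a, b):
    """Exact Jaccard similarity |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b)

def stable_hash(x):
    """Deterministic integer hash (Python's built-in hash() is salted per process)."""
    return int(hashlib.md5(x.encode()).hexdigest(), 16)

def minhash_signature(items, seeds, p=2_147_483_647):
    """One signature entry per hash function: min over the set of (a*h(x) + b) mod p."""
    return [min((a * stable_hash(x) + b) % p for x in items) for a, b in seeds]

random.seed(0)
seeds = [(random.randrange(1, 2**31), random.randrange(0, 2**31)) for _ in range(200)]

s1 = {"apple", "banana", "cherry", "date"}
s2 = {"banana", "cherry", "date", "fig"}

sig1 = minhash_signature(s1, seeds)
sig2 = minhash_signature(s2, seeds)

# Fraction of agreeing positions estimates the Jaccard similarity.
estimate = sum(x == y for x, y in zip(sig1, sig2)) / len(seeds)
print(jaccard(s1, s2), estimate)  # exact similarity is 0.6; estimate should be close
```

In practice the whole point is that the fixed-length signature can stand in for arbitrarily large sets, which is what makes the measure workable at scale.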

The focus of the book is on data mining (on large datasets) as opposed to machine learning. The distinction may strike the reader as somewhat arbitrary, given the degree of interaction between these two fields, but the authors justify it in terms of a focus on algorithms that can be applied directly to data. Although these include what is known in machine learning circles as "unsupervised learning," the book draws most heavily on databases and information retrieval sources. The first two chapters cover the relevant concepts and tools from these main sources, along with preliminaries on statistical modeling and hash functions, the latter being pervasive throughout the book. The MapReduce programming model is naturally given a prominent place and is explained in great detail. We begin by reviewing the notions of distance measures and spaces. The two major approaches to clustering – hierarchical and point-assignment – are defined. We then turn to a discussion of the “curse of dimensionality,” which makes clustering in high-dimensional spaces difficult, but also, as we shall see, enables some simplifications if used correctly in a clustering algorithm.
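As a toy illustration of the hierarchical approach mentioned above, the following sketch (not from the book; the points and cluster count are invented, and this naive version is far too slow for the large datasets the text targets) starts with each point as its own cluster and repeatedly merges the closest pair under single-link Euclidean distance:

```python
from math import dist

def agglomerative(points, k):
    """Naive hierarchical (agglomerative) clustering: repeatedly merge the
    two clusters whose closest members are nearest, until k clusters remain."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Single-link distance: closest pair of points across the clusters.
                d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters

points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
clusters = agglomerative(points, k=2)
print(sorted(sorted(c) for c in clusters))
```

A point-assignment method such as k-means would instead fix the number of clusters up front and assign each point to its nearest center, which is the trade-off the chapter goes on to examine.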

The difference leads to a new class of algorithms for finding frequent itemsets. We begin with the A-Priori Algorithm, which works by eliminating most large sets as candidates by looking first at smaller sets and recognizing that a large set cannot be frequent unless all its subsets are. We then consider various improvements to the basic A-Priori idea, concentrating on very large data sets that stress the available main memory. The problem of finding frequent itemsets differs from the similarity search discussed in Chapter 3. Here we are interested in the absolute number of baskets that contain a particular set of items, whereas in Chapter 3 we wanted items that have a large fraction of their baskets in common, even if the absolute number of baskets is small. CS224W: Social and Information Networks is a graduate-level course that covers recent research on the structure and analysis of large social and information networks, and on models and algorithms that abstract their basic properties. The class explores how to practically analyze large-scale network data and how to reason about it through models for network structure and evolution.
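The A-Priori idea described above can be sketched in a few lines (the baskets and support threshold here are invented for illustration; a real implementation for data that stresses main memory would stream baskets from disk and count candidates much more carefully):

```python
from itertools import combinations

def apriori(baskets, support):
    """Find all itemsets contained in at least `support` baskets.

    Level-wise search: a k-itemset can only be frequent if every one of its
    (k-1)-subsets is frequent, so each pass counts only candidates built
    from the previous pass's survivors."""
    # Pass 1: count singletons.
    counts = {}
    for basket in baskets:
        for item in basket:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= support}
    result = dict(frequent)
    k = 2
    while frequent:
        items = sorted({i for s in frequent for i in s})
        # Monotonicity prunes candidates whose (k-1)-subsets are not all frequent.
        candidates = [frozenset(c) for c in combinations(items, k)
                      if all(frozenset(sub) in frequent
                             for sub in combinations(c, k - 1))]
        counts = {c: sum(1 for b in baskets if c <= set(b)) for c in candidates}
        frequent = {s: c for s, c in counts.items() if c >= support}
        result.update(frequent)
        k += 1
    return result

baskets = [
    {"bread", "milk"},
    {"bread", "beer", "eggs"},
    {"milk", "beer", "bread"},
    {"bread", "milk", "beer"},
    {"milk", "eggs"},
]
freq = apriori(baskets, support=3)
print(sorted((sorted(s), c) for s, c in freq.items()))
```

Note how the pruning pays off even here: {beer, milk} appears in only two baskets, so no triple containing it is ever counted in the third pass.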

Logistics

Although theoretical issues are discussed where relevant, the focus of the text is clearly on practical issues. Readers interested in a more rigorous treatment of the theoretical foundations for these techniques should look elsewhere. Fortunately, each chapter contains key references to guide the more formally minded reader. We turn in this chapter to one of the major families of techniques for characterizing data: the discovery of frequent itemsets. This problem is often viewed as the discovery of “association rules,” although the latter is a more complex characterization of data, whose discovery depends fundamentally on the discovery of frequent itemsets.

Asda Great Deal

Free UK shipping. 15-day free returns.