Discovering Knowledge in Data: An Introduction to Data Mining (2nd Edition)



List and describe the five primitives for specifying a data mining task. Answer: The first primitive, the task-relevant data, involves specifying the database and tables or data warehouse containing the relevant data, the conditions for selecting the relevant data, the relevant attributes or dimensions for exploration, and instructions regarding the ordering or grouping of the data retrieved. The second primitive is the kind of knowledge to be mined. Here the user can be more specific and provide pattern templates that all discovered patterns must match. These templates, or metapatterns (also called metarules or metaqueries), can be used to guide the discovery process.

The third primitive is background knowledge. Such knowledge can be used to guide the knowledge discovery process and to evaluate the patterns that are found; concept hierarchies and user beliefs regarding relationships in the data are forms of background knowledge. The fourth primitive is the set of interestingness measures, which allow the user to confine the number of uninteresting patterns returned by the process, as a data mining process may generate a large number of patterns.

Interestingness measures can be specified for such pattern characteristics as simplicity, certainty, utility, and novelty. The fifth primitive concerns the presentation and visualization of discovered patterns: in order for data mining to be effective in conveying knowledge to users, data mining systems should be able to display the discovered patterns in multiple forms, such as rules, tables, cross tabs (cross-tabulations), pie or bar charts, decision trees, cubes, or other visual representations.

Describe why concept hierarchies are useful in data mining. Answer: Concept hierarchies define a sequence of mappings from a set of lower-level concepts to higher-level, more general concepts and can be represented as a set of nodes organized in a tree, in the form of a lattice, or as a partial order.

They are useful in data mining because they allow the discovery of knowledge at multiple levels of abstraction and provide the structure on which data can be generalized (rolled up) or specialized (drilled down). Together, these operations allow users to view the data from different perspectives, gaining further insight into relationships hidden in the data. In addition, by replacing lower-level concepts with higher-level ones, concept hierarchies reduce the data so that mining can be performed on a smaller, generalized data set, which is more efficient than mining on a large, uncompressed data set.

Outliers are often discarded as noise.


However, exceptions in the data can be valuable: for example, exceptions in credit card transactions can help us detect the fraudulent use of credit cards. Taking fraud detection as an example, propose two methods that can be used to detect outliers and discuss which one is more reliable. Answer: One method is to cluster the data; the outliers are those data points that do not fall into any cluster.

Among the various kinds of clustering methods, density-based clustering may be the most effective (clustering is detailed in Chapter 8). A second method is to build a prediction (regression) model: if the predicted value for a data point differs greatly from the given value, then the given value may be considered an outlier. Outlier detection based on clustering techniques may be more reliable: because clustering is unsupervised, we do not need to make any assumptions regarding the data distribution.

In contrast, regression and prediction methods require us to make assumptions about the data distribution, which may be inaccurate when the available data are insufficient.
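As a minimal illustration of the clustering-based approach (a sketch only, assuming scikit-learn is available; the transaction features and DBSCAN parameters below are made up for this example):

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Hypothetical transaction features: (amount, hour of day).
transactions = np.array([
    [12.5, 10], [15.0, 11], [14.2, 12], [13.8, 10], [16.1, 11],
    [980.0, 3],   # an unusually large late-night charge
])

# Standardize so both features contribute comparably to the distance computation.
X = StandardScaler().fit_transform(transactions)

# DBSCAN assigns the label -1 to points that belong to no dense cluster ("noise").
labels = DBSCAN(eps=0.8, min_samples=3).fit_predict(X)
print(transactions[labels == -1])   # candidate outliers, e.g. potential fraud
```

A regression-based alternative would fit a model (say, of amount against time of day) and flag transactions with unusually large residuals, but, as noted above, that requires distributional assumptions that the clustering approach avoids.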

Recent applications pay special attention to spatiotemporal data streams. A spatiotemporal data stream contains spatial information that changes over time and arrives in the form of stream data, i.e., the data flow in and out like possibly infinite streams. Answer: (a) Three application examples of spatiotemporal data streams are: sequences of sensor images of a geographical region over time; climate images from satellites; and data that describe the evolution of natural phenomena, such as forest coverage and forest fires.

The knowledge that can be mined from spatiotemporal data streams really depends on the application. However, one unique type of knowledge about stream data is the pattern of spatial change with respect to time.

For example, the changing traffic status of several highway junctions in a city, from the early morning to rush hours and back to off-peak hours, can show clearly where the traffic comes from and goes to, and hence would help traffic officers plan effective alternative lanes in order to reduce the traffic load.

As another example, the sudden appearance of a point in the spectrum space image may indicate that a new planet is being formed, and the changing of humidity, temperature, and pressure in climate data may reveal patterns of how a new typhoon is created. One major challenge is how to deal with the continually arriving, large-scale data. Since the data keep flowing in and each snapshot of data is usually huge, some aggregation or compression techniques may have to be applied, and old raw data may have to be dropped.

Mining such aggregated or lossy data is challenging. In addition, some patterns may occur only over a long time period, but it may not be possible to keep the data for such a long duration; thus, these patterns may not be uncovered. Moreover, the sensed spatial data may not be very accurate, so the algorithms must have a high tolerance for noise. Take mining space images as the application: we seek to observe whether any new planet is being created or any old planet is disappearing.

This is a change-detection problem. Since the image frames keep arriving as a stream f1, f2, ..., the algorithm can be sketched as follows: compare each newly arrived frame with the previous one and check whether the planets detected in the two frames match. If not, report a planet appearance (if an unmatched planet appears in the new frame) or a planet disappearance (if an unmatched planet appears only in the old frame). In practice, matching between two frames may not be easy because the earth is rotating and thus the sensed data may have slight variations; some advanced techniques from image processing may be applied. The overall skeleton of the algorithm is simple: each new incoming image frame is compared only with the previous one, satisfying the time and resource constraints, and the reported changes are useful because it is infeasible for astronomers to dig into every frame to detect whether a new planet has appeared or an old one has disappeared.
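A minimal sketch of this frame-to-frame comparison (the planet-detection step, detect_planets, is hypothetical and assumed to be supplied by an image-processing component):

```python
from typing import List, Tuple

Planet = Tuple[float, float]  # (x, y) position within a frame, for illustration only

def same_object(p: Planet, q: Planet, tol: float = 5.0) -> bool:
    """Treat two detections as the same planet if they lie within tol pixels of each other."""
    return abs(p[0] - q[0]) <= tol and abs(p[1] - q[1]) <= tol

def compare_frames(old: List[Planet], new: List[Planet]) -> None:
    """Report appearances and disappearances between two consecutive frames."""
    for p in new:
        if not any(same_object(p, q) for q in old):
            print(f"possible new planet at {p}")
    for q in old:
        if not any(same_object(q, p) for p in new):
            print(f"planet at {q} no longer detected")

# Streaming loop: each frame is compared only with its predecessor, so memory use
# stays constant no matter how long the stream runs.
# previous = detect_planets(first_frame)      # detect_planets is a hypothetical helper
# for frame in frame_stream:
#     current = detect_planets(frame)
#     compare_frames(previous, current)
#     previous = current
```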

Describe the differences between the following approaches for the integration of a data mining system with a database or data warehouse system: no coupling, loose coupling, semitight coupling, and tight coupling.

State which approach you think is the most popular, and why. Answer: The differences between the architectures are as follows. No coupling: the data mining system does not use any functions of a database or data warehouse system; thus, this architecture represents a poor design choice. Loose coupling: the data mining system uses some facilities of a database or data warehouse system to fetch, store, and manage its data; thus, this architecture can take advantage of the flexibility, efficiency, and features (such as indexing) that the database and data warehousing systems may provide.

However, it is difficult for loose coupling to achieve high scalability and good performance with large data sets, as many such systems are memory-based. Semitight coupling: in addition to linking the data mining system to a database or data warehouse system, efficient implementations of a few essential data mining primitives are provided in the latter; also, some frequently used intermediate mining results can be precomputed and stored in the database or data warehouse system, thereby enhancing the performance of the data mining system. Tight coupling: the data mining system is smoothly integrated into the database or data warehouse system, so the data mining subsystem is treated as one functional component of an information system. This is a highly desirable architecture, as it facilitates efficient implementations of data mining functions, high system performance, and an integrated information processing environment.

From the descriptions of the architectures provided above, it can be seen that tight coupling is the best alternative if technical and implementation issues are set aside. However, because much of the technical infrastructure needed in a tightly coupled system is still evolving, implementing such a system is nontrivial. Therefore, the most popular architecture is currently semitight coupling, as it provides a compromise between loose and tight coupling.

Describe three challenges to data mining regarding data mining methodology and user interaction issues. Answer: Challenges to data mining regarding data mining methodology and user interaction issues include the following: mining different kinds of knowledge in databases, interactive mining of knowledge at multiple levels of abstraction, incorporation of background knowledge, data mining query languages and ad hoc data mining, presentation and visualization of data mining results, handling noisy or incomplete data, and pattern evaluation.

Mining different kinds of knowledge: different users are interested in different kinds of knowledge, so each of these tasks will use the same database in different ways and will require different data mining techniques. Interactive mining at multiple levels of abstraction: the user can interactively view the data and discover patterns at multiple granularities and from different angles. Incorporation of background knowledge: such knowledge helps to focus and speed up a data mining process and to judge the interestingness of discovered patterns.

What are the major challenges of mining a huge amount of data (such as billions of tuples) in comparison with mining a small amount of data (such as a data set of a few hundred tuples)?

Answer: One challenge to data mining regarding performance issues is the efficiency and scalability of data mining algorithms. Data mining algorithms must be efficient and scalable in order to effectively extract information from large amounts of data in databases within predictable and acceptable running times. Another challenge is the parallel, distributed, and incremental processing of data mining algorithms.

The need for parallel and distributed data mining algorithms has been brought about by the huge size of many databases, the wide distribution of data, and the computational complexity of some data mining methods. Due to the high cost of some data mining processes, incremental data mining algorithms incorporate database updates without having to mine the entire data set again from scratch.

Major data mining challenges for two applications, data streams and bioinformatics, are addressed here. Data streams: data stream analysis presents multiple challenges. First, data streams flow in and out continuously and change dynamically, so a data analysis system that successfully handles this type of data needs to operate in real time and adapt to changing patterns as they emerge. Another major challenge is that the size of stream data can be huge or even unbounded.

Because of this size, only a single scan, or a small number of scans, of the data is typically allowed. For further details on mining data streams, please consult Chapter 8. Bioinformatics: the field of bioinformatics encompasses many subfields, such as genomics, proteomics, molecular biology, and cheminformatics, and each of these subfields has many research challenges. Some of the major challenges of data mining in bioinformatics are outlined as follows.

Due to limitations of space, some of the terminology used here may not be explained. It has been estimated that genomic and proteomic data are doubling every 12 months, and most of these data are scattered in unstructured and nonstandard forms across many different databases throughout the research community. In addition, many biological experiments do not yield exact results and are prone to errors, because it is very difficult to model exact biological conditions and processes. For example, the structure of a protein is not rigid and depends on its environment.

Hence, the structures determined by nuclear magnetic resonance (NMR) or crystallography experiments may not represent the exact structure of the protein. Since these experiments are performed in parallel by many institutions and scientists, they may each yield slightly different structures, and the consolidation and validation of these conflicting data is a difficult challenge. Public biological databases have become very popular in the past few years; however, due to intellectual property concerns, a great deal of useful biological information is buried in proprietary databases within large pharmaceutical companies.

Most results are published, but they are seldom recorded in databases together with the experimental details (who, when, how, etc.). Hence, a great deal of useful information is buried in published and unpublished literature, which has given rise to the need for text mining systems. For example, many experimental results regarding protein interactions have been published; mining this information may provide crucial insight into biological pathways and help predict potential interactions.

The extraction and development of domain-specific ontologies is another related research challenge. In drug discovery, the most time-consuming step is the lead discovery phase.


In this step, large databases of compounds need to be mined to identify potential lead candidates that will suitably interact with the potential target. Currently, due to the lack of effective data mining systems, this step involves many trial-and-error iterations of wet lab or protein assay experiments, which are highly time-consuming and costly.

Hence, one of the current challenges in bioinformatics is the development of intelligent, computational data mining systems that can eliminate false positives and generate more true positives before the wet lab experimentation stage. The docking problem is especially tricky, because it is governed by many physical interactions at the molecular level.

The main problem is the large solution space generated by the complex interactions at the molecular level, and the molecular docking problem remains largely unsolved. Other related research areas include protein classification systems based on structure and function. Statistical and other pattern-analysis methods are available, and a large research community in data mining is focusing on adapting these pattern analysis and classification methods for mining microarray and gene expression data.

Chapter 2: Data Preprocessing

Data quality can be assessed in terms of accuracy, completeness, and consistency. Propose two other dimensions of data quality. Answer: Two other dimensions that can be used to assess data quality are timeliness and believability; interpretability and accessibility are further possibilities.

Suppose that the values for a given set of data are grouped into intervals, with the intervals and their corresponding frequencies given. Answer: The approximate median can be computed from the grouped frequencies by interpolation: median ≈ L1 + ((N/2 − (Σ freq)_l) / freq_median) × width, where L1 is the lower boundary of the interval containing the median, N is the total number of values, (Σ freq)_l is the sum of the frequencies of all intervals below that interval, freq_median is the frequency of the median interval, and width is the interval width.

Give three additional commonly used statistical measures (i.e., beyond those already illustrated) for the characterization of data dispersion, and discuss how they can be computed efficiently in large databases. Answer: Data dispersion, also known as variance analysis, is the degree to which numeric data tend to spread and can be characterized by such statistical measures as mean deviation, measures of skewness, and the coefficient of variation.

Each of these values will be greater for distributions with a larger spread. Note that all of the input values used to calculate these three statistical measures are algebraic measures: the value for the entire database can be efficiently calculated by partitioning the database, computing the values for each of the separate partitions, and then merging these values into an algebraic equation that gives the value for the entire database.
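A small sketch of this partition-and-merge idea, using the variance as an example (the variance of the whole data set is recovered from the distributive quantities count, sum, and sum of squares of each partition):

```python
from typing import Iterable, List, Tuple

def partial_stats(chunk: Iterable[float]) -> Tuple[int, float, float]:
    """Distributive summaries of one partition: count, sum, and sum of squares."""
    xs = list(chunk)
    return len(xs), sum(xs), sum(x * x for x in xs)

def merged_variance(partials: List[Tuple[int, float, float]]) -> float:
    """Combine per-partition summaries into the variance of the entire data set."""
    n = sum(p[0] for p in partials)
    s = sum(p[1] for p in partials)
    ss = sum(p[2] for p in partials)
    mean = s / n
    return ss / n - mean * mean          # E[x^2] - (E[x])^2

# Two partitions of the same data give the same variance as a single pass would.
chunks = [[13, 15, 16, 16, 19, 20], [20, 21, 22, 22, 25, 25]]
print(merged_variance([partial_stats(c) for c in chunks]))
```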


The measures of dispersion described here were obtained from Statistical Methods in Research and Production, fourth edition, edited by O. L. Davies and P. L. Goldsmith.

Suppose that the data for analysis includes the attribute age, and that the age values for the data tuples are, in increasing order: 13, 15, 16, 16, 19, 20, 20, 21, 22, 22, 25, 25, 25, 25, 30, 33, 33, 35, 35, 35, 35, 36, 40, 45, 46, 52, 70. What is the median? Answer: The median (the middle value of the ordered list) is 25. This data set has two values that occur with the same highest frequency and is, therefore, bimodal: the modes (the values occurring with the greatest frequency) are 25 and 35. The first quartile (corresponding to the 25th percentile) of the data is 20, and the third quartile (corresponding to the 75th percentile) is 35. The five-number summary of a distribution consists of the minimum value, first quartile, median value, third quartile, and maximum value; it provides a good summary of the shape of the distribution, and for these data it is 13, 20, 25, 35, 70. A boxplot of the data is omitted here; please refer to the corresponding figure.
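These summary statistics are easy to verify programmatically; a quick sketch using NumPy (the data list is the reconstructed age list above):

```python
import numpy as np
from collections import Counter

age = np.array([13, 15, 16, 16, 19, 20, 20, 21, 22, 22, 25, 25, 25, 25, 30,
                33, 33, 35, 35, 35, 35, 36, 40, 45, 46, 52, 70])

median = np.median(age)                    # 25.0
q1, q3 = np.percentile(age, [25, 75])      # roughly 20 and 35
counts = Counter(age.tolist())
top = max(counts.values())
modes = sorted(v for v, c in counts.items() if c == top)   # [25, 35]
print(age.min(), q1, median, q3, age.max(), modes)         # five-number summary plus modes
```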

A quantile plot is a graphical method used to show the approximate percentage of values below or equal to the independent variable in a univariate distribution; it displays quantile information for all the data, where the values measured for the independent variable are plotted against their corresponding quantiles. A quantile-quantile plot, however, graphs the quantiles of one univariate distribution against the corresponding quantiles of another univariate distribution.

Both axes display the range of values measured for their corresponding distribution, and points are plotted that correspond to the quantile values of the two distributions. A reference line (y = x) representing the case where the two sets of quantiles are equal is typically drawn: points that lie above such a line indicate a correspondingly higher value for the distribution plotted on the y-axis than for the distribution plotted on the x-axis at the same quantile, and the opposite is true for points lying below this line.

In many applications, new data sets are incrementally added to the existing large data sets. Thus, an important consideration for computing a descriptive data summary is whether a measure can be computed efficiently in an incremental manner.

Use count, standard deviation, and median as examples to show that a distributive or algebraic measure facilitates efficient incremental computation, whereas a holistic measure does not. Answer: Count is a distributive measure and is easily updated for incremental additions. The standard deviation is an algebraic measure: we simply need to calculate the sum and the squared sum of the new numbers, add them to the existing sums, update the count of the numbers, and plug these into the formula to obtain the new standard deviation, all without looking at the whole data set. The median, however, is a holistic measure: when we add a new value or values, we have to sort the new set and then find the median based on that new sorted set. This is much harder and thus makes the incremental addition of new values difficult.
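A small sketch contrasting the two cases (running sums suffice for count and standard deviation, while the median forces us to keep and re-sort all of the values):

```python
import math

class RunningStats:
    """Count and standard deviation maintained from running sums (distributive/algebraic)."""
    def __init__(self):
        self.n = 0
        self.s = 0.0     # running sum
        self.ss = 0.0    # running sum of squares

    def add(self, values):
        for x in values:         # cost proportional to the new values only
            self.n += 1
            self.s += x
            self.ss += x * x

    def std(self):
        mean = self.s / self.n
        return math.sqrt(self.ss / self.n - mean * mean)

# The median is holistic: every incremental update needs access to all values seen so far.
seen = []
def incremental_median(new_values):
    seen.extend(new_values)
    ordered = sorted(seen)       # full re-sort (or merge) of the accumulated data each time
    mid = len(ordered) // 2
    return ordered[mid] if len(ordered) % 2 else (ordered[mid - 1] + ordered[mid]) / 2
```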

In real-world data, tuples with missing values for some attributes are a common occurrence. Describe various methods for handling this problem. Answer: The various methods for handling the problem of missing values in data tuples include: (a) Ignoring the tuple: this is usually done when the class label is missing (assuming the mining task involves classification or description).

This method is not very effective unless the tuple contains several attributes with missing values, and it is especially poor when the percentage of missing values per attribute varies considerably. (b) Using the attribute mean to fill in the missing value: for example, compute the mean income of the customers and use this value to replace any missing values for income. (c) Using the most probable value to fill in the missing value: for example, using the other customer attributes in the data set, we can construct a decision tree to predict the missing values for income.

Using the data for age given in the earlier exercise: (a) Use smoothing by bin means to smooth the data, using a bin depth of 3. Illustrate your steps and comment on the effect of this technique for the given data.

Answer: (a) The following steps are required to smooth the data using bin means with a bin depth of 3. Step 1: Sort the data (this step is not required here, as the data are already sorted). Step 2: Partition the sorted data into equal-frequency bins of three values each. Step 3: Replace each value in a bin by the mean of the bin (a short code sketch of this procedure appears after the discussion below). (b) Outliers in the data may be identified by clustering, where similar values are organized into groups or clusters; values that fall outside of the set of clusters may be considered outliers. Alternatively, a combination of computer and human inspection can be used, where a predetermined data distribution is implemented to allow the computer to identify possible outliers.

These possible outliers can then be verified by human inspection with much less effort than would be required to verify the entire initial data set. Other methods that can be used for data smoothing include alternate forms of binning, such as smoothing by bin medians or smoothing by bin boundaries. Alternatively, equal-width bins can be used to implement any of the forms of binning, where the interval range of values in each bin is constant.

Methods other than binning include using regression techniques to smooth the data by fitting it to a function, such as through linear or multiple regression. Classification techniques can also be used to implement concept hierarchies that smooth the data by rolling up lower-level concepts to higher-level concepts.
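A short sketch of the bin-means smoothing described above, with equal-frequency bins of depth 3 applied to the age data:

```python
def smooth_by_bin_means(values, depth=3):
    """Partition the sorted values into equal-frequency bins and replace each value by its bin mean."""
    values = sorted(values)
    smoothed = []
    for i in range(0, len(values), depth):
        bin_values = values[i:i + depth]
        mean = sum(bin_values) / len(bin_values)
        smoothed.extend([round(mean, 2)] * len(bin_values))
    return smoothed

age = [13, 15, 16, 16, 19, 20, 20, 21, 22, 22, 25, 25, 25, 25, 30,
       33, 33, 35, 35, 35, 35, 36, 40, 45, 46, 52, 70]
print(smooth_by_bin_means(age))
# The first bin (13, 15, 16) becomes (14.67, 14.67, 14.67), and so on;
# smoothing dampens the effect of individual fluctuations within each bin.
```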


Discuss issues to consider during data integration. Answer: Data integration involves combining data from multiple sources into a coherent data store. Issues that must be considered include schema integration and object matching: how can equivalent real-world entities from multiple data sources be matched up? This is referred to as the entity identification problem.


Redundancy is another issue: duplications at the tuple level may occur and thus need to be detected and resolved.

Are these two variables positively or negatively correlated? Answer: The mean of each variable is computed first, and then the correlation coefficient. The computed correlation coefficient is positive, so the variables are positively correlated.

What are the value ranges of the following normalization methods? Answer: (a) Use min-max normalization to transform the value 35 for age onto the range [0.0, 1.0].

For readability, let A be the attribute age. Using the min-max formula with the minimum and maximum age values, the value 35 is mapped onto the target range; the z-score and decimal-scaling transformations of the same value are computed analogously (a worked sketch of all three methods appears after the flowchart question below). Given the data, one may prefer decimal scaling for normalization, because such a transformation would maintain the data distribution and be intuitive to interpret, while still allowing mining on specific age groups. Min-max normalization is less appropriate, because age values outside the original range may be present in future data. Z-score normalization may not be as intuitive to the user in comparison with decimal scaling.

Use a flow chart to summarize the following procedures for attribute subset selection: (a) stepwise forward selection, (b) stepwise backward elimination, and (c) a combination of forward selection and backward elimination. Answer: The flowcharts are omitted here.
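Returning to the normalization question, here is a minimal sketch of the three methods; the minimum, maximum, mean, and standard deviation used below are illustrative assumptions, not values taken from the exercise:

```python
def min_max(v, lo, hi, new_lo=0.0, new_hi=1.0):
    return (v - lo) / (hi - lo) * (new_hi - new_lo) + new_lo

def z_score(v, mean, std):
    return (v - mean) / std

def decimal_scaling(v, max_abs):
    j = len(str(int(max_abs)))        # smallest j such that all scaled values fall below 1
    return v / (10 ** j)

# Illustrative statistics only: suppose age ranges from 13 to 70 with mean 30 and std 12.
age = 35
print(min_max(age, 13, 70))       # about 0.39, within [0.0, 1.0]
print(z_score(age, 30, 12))       # about 0.42
print(decimal_scaling(age, 70))   # 0.35
```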

Suppose a group of 12 sales price records has been sorted as follows: 5, 10, 11, 13, 15, 35, 50, 55, 72, 92, …. Partition them into three bins by each of the following methods.

Propose several methods for median approximation. Analyze their respective complexity under different parameter settings and decide to what extent the real value can be approximated. Moreover, suggest a heuristic strategy to balance between accuracy and complexity, and then apply it to all the methods you have given. Answer: This question can be dealt with either theoretically or empirically, but doing some experiments to get the result is perhaps more interesting.

Given are some data sets sampled from four different distributions: the former two distributions are symmetric, whereas the latter two are skewed. For example, one can use the bucket-based approximation described earlier: partition the data range into k equal-width intervals, count the frequency of each interval in a single scan, and then estimate the median by interpolation within the interval that contains it. Obviously, the error incurred will decrease as k becomes larger; however, the time used in the whole procedure will also increase.
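A sketch of this bucket-based approximation; the number of intervals k is the accuracy/cost knob discussed above:

```python
import random

def approx_median(values, k=10):
    """Approximate the median from k equal-width interval counts (a single pass over the data)."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k or 1.0
    freq = [0] * k
    for v in values:
        freq[min(int((v - lo) / width), k - 1)] += 1
    n, cum = len(values), 0
    for i, f in enumerate(freq):
        if cum + f >= n / 2:                       # this interval contains the median
            return lo + i * width + (n / 2 - cum) / f * width
        cum += f

data = [random.gauss(50, 10) for _ in range(10_000)]
print(approx_median(data, k=5), approx_median(data, k=50), sorted(data)[len(data) // 2])
```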



The product of the error made and the time used is a good optimality measure. In practice, this parameter value can be chosen to improve system performance. There are also other approaches for median approximation; the student may suggest a few, analyze the best trade-off point, and compare the results from the different approaches. A possible approach is to hierarchically divide the whole data set into intervals: first, divide it into k regions and find the region in which the median resides; second, divide this particular region into k subregions and find the subregion in which the median resides; and so on.

This iterates until the width of the subregion reaches a predefined threshold, and then the median approximation formula stated above is applied. In this way, we can confine the median to a smaller area without globally partitioning all of the data into shorter intervals, which would be expensive; the cost is proportional to the number of intervals.

It is important to define or select similarity measures in data analysis. However, there is no commonly accepted subjective similarity measure.

Using different similarity measures may lead to different results. Nonetheless, some apparently different similarity measures may be equivalent after some transformation. Suppose we have a two-dimensional data set of five points x1, ..., x5 described by attributes A1 and A2, together with a query point. Rank the data points by their similarity to the query point using cosine similarity, and then normalize the data and use Euclidean distance on the transformed data to rank the data points.

Answer: Using these definitions, we obtain the similarity of each point to the query point. Based on cosine similarity, the order is x1, x3, x4, x2, x5. Next, each point is normalized by dividing it by its Euclidean norm (conceptually, the length of the vector). Based on the Euclidean distance between the normalized points and the normalized query point, the order is again x1, x3, x4, x2, x5, which is the same as the cosine similarity order.
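A sketch of the two rankings; since the original data table is not reproduced here, the coordinates below are placeholders chosen only to illustrate the computation:

```python
import math

def cosine_sim(p, q):
    return sum(a * b for a, b in zip(p, q)) / (math.hypot(*p) * math.hypot(*q))

def normalize(p):
    norm = math.hypot(*p)                 # the length (Euclidean norm) of the vector
    return tuple(a / norm for a in p)

# Placeholder two-dimensional points and query.
points = {"x1": (1.5, 1.7), "x2": (2.0, 1.9), "x3": (1.6, 1.8),
          "x4": (1.2, 1.5), "x5": (1.5, 1.0)}
query = (1.4, 1.6)

by_cosine = sorted(points, key=lambda k: cosine_sim(points[k], query), reverse=True)
by_normed_euclid = sorted(points, key=lambda k: math.dist(normalize(points[k]), normalize(query)))
print(by_cosine)          # ranking by cosine similarity (most similar first)
print(by_normed_euclid)   # the same ranking, since d^2 = 2 - 2*cos for unit vectors
```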

ChiMerge [Ker92] is a supervised, bottom-up (i.e., merge-based) data discretization method. Perform data discretization for each of the four numerical attributes of the Iris data set using the ChiMerge method. You will need to write a small program to do this, to avoid clumsy numerical computation. Submit your simple analysis and your test results: split points, final intervals, and your documented source program. Answer: (a) Briefly describe how ChiMerge works: each distinct value of the attribute initially forms its own interval; χ² statistics are then computed for every pair of adjacent intervals with respect to the class label, and the pair with the lowest χ² value (i.e., the pair whose class distributions are most similar) is merged. This repeats until a chosen stopping criterion, such as a χ² significance threshold or a maximum number of intervals, is satisfied. (b) The final intervals and split points for sepal length, sepal width, petal length, and petal width are omitted here (a compact code sketch of the procedure is given after this answer).

Also, an alternative binning method could be implemented, such as smoothing by bin modes. The user can again specify more meaningful names for the concept hierarchy levels generated, by reviewing the maximum and minimum values of the bins with respect to background knowledge about the data.
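Returning to the ChiMerge exercise, a compact sketch of the merging procedure (simplified to stop at a fixed number of intervals rather than a χ² significance threshold; in the exercise, the values and class labels would come from one Iris attribute and the species column):

```python
from collections import Counter

def chi2(int_a, int_b):
    """Chi-square statistic for two adjacent intervals, each a Counter of class labels."""
    classes = set(int_a) | set(int_b)
    total = sum(int_a.values()) + sum(int_b.values())
    stat = 0.0
    for interval in (int_a, int_b):
        n_i = sum(interval.values())
        for c in classes:
            expected = n_i * (int_a[c] + int_b[c]) / total
            if expected:
                stat += (interval[c] - expected) ** 2 / expected
    return stat

def chimerge(values, labels, max_intervals=6):
    """Merge the adjacent pair with the smallest chi-square until max_intervals remain."""
    data = sorted(zip(values, labels))
    intervals = []                       # list of (lower_bound, Counter of class labels)
    for v, c in data:                    # start with one interval per distinct value
        if intervals and intervals[-1][0] == v:
            intervals[-1][1][c] += 1
        else:
            intervals.append((v, Counter({c: 1})))
    while len(intervals) > max_intervals:
        scores = [chi2(intervals[i][1], intervals[i + 1][1]) for i in range(len(intervals) - 1)]
        i = scores.index(min(scores))
        intervals[i:i + 2] = [(intervals[i][0], intervals[i][1] + intervals[i + 1][1])]
    return [lb for lb, _ in intervals]   # the surviving lower bounds act as split points

# e.g. chimerge(petal_lengths, species_labels, max_intervals=6)  # hypothetical Iris inputs
```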

Robust data loading poses a challenge in database systems because the input data are often dirty: in many cases, an input record may have several missing values, and some records could be contaminated (e.g., with values of the wrong data type). Work out an automated data cleaning and loading algorithm so that erroneous data will be marked and contaminated data will not be mistakenly inserted into the database during data loading. Answer: A sketch of such an algorithm:

begin
  for each input record r
  begin
    check r for missing values; if possible, fill them in according to domain knowledge, else mark r as erroneous;
    check r for wrong data types; if a value cannot be corrected, mark r as contaminated;
    insert r into the database only if it has not been marked as contaminated;
  end
end

We can, for example, use the data already in the database to construct a decision tree to induce missing values for a given attribute, and at the same time have human-entered rules on how to correct wrong data types.

State why, for the integration of multiple heterogeneous information sources, many companies in industry prefer the update-driven approach (which constructs and uses data warehouses) rather than the query-driven approach (which applies wrappers and integrators). Describe situations where the query-driven approach is preferable to the update-driven approach.

Answer: For decision-making queries and frequently asked queries, the update-driven approach is preferable, because the expensive data integration and aggregate computation are done before query processing time. For the data collected in multiple heterogeneous databases to be used in decision-making processes, any semantic heterogeneity problems among the databases must be analyzed and solved so that the data can be integrated and summarized.

If the query-driven approach is employed instead, these queries will be translated into multiple (often complex) queries for each individual database. The translated queries will compete for resources with the activities at the local sites, thus degrading their performance. In addition, these queries will generate a complex answer set, which will require further filtering and integration. Thus, the query-driven approach is, in general, inefficient and expensive for such workloads.

The update-driven approach employed in data warehousing is faster and more efficient, since most of the queries needed can be answered off-line. The query-driven approach is preferable, however, for queries asked only infrequently, where building and maintaining a data warehouse would not be cost-effective; this is also the case if the queries rely on the most current data, because data warehouses do not contain the most current information.

Briefly compare the following concepts. You may use an example to explain your point(s). A starnet query model is a query model (not a schema model) that consists of a set of radial lines emanating from a central point.

Each radial line represents a dimension, and each step away from the center represents a step down the concept hierarchy of that dimension. The starnet query model, as suggested by its name, is used for querying and provides users with a global view of OLAP operations. Data transformation is the process of converting data from heterogeneous sources into a unified data warehouse format or semantics.

Refresh is the function propagating the updates from the data sources to the warehouse. An enterprise warehouse provides corporate-wide data integration, usually from one or more operational systems or external information providers, and is cross-functional in scope, whereas the data mart is confined to specific selected subjects such as customer, item, and sales for a marketing data mart. An enterprise warehouse typically contains detailed data as well as summarized data, whereas the data in a data mart tend to be summarized.

The implementation cycle of an enterprise warehouse may take months or years, whereas that of a data mart is more likely to be measured in weeks. A virtual warehouse is a set of views over operational databases. For efficient query processing, only some of the possible summary views may be materialized. A virtual warehouse is easy to build but requires excess capacity on operational database servers. Suppose that a data warehouse consists of the three dimensions time, doctor, and patient, and the two measures count and charge, where charge is the fee that a doctor charges a patient for a visit.

Answer: (a) Enumerate three classes of schemas that are popularly used for modeling data warehouses: the star schema, the snowflake schema, and the fact constellation schema.



A star schema diagram for this warehouse is omitted here.

Suppose that a data warehouse for Big University consists of the following four dimensions: student, course, semester, and instructor, and two measures: count and avg grade. At the lowest conceptual level (e.g., for a given student, course, semester, and instructor combination), avg grade stores the actual course grade of the student; at higher conceptual levels, avg grade stores the average grade for the given combination.

Answer: (a) Draw a snowflake schema diagram for the data warehouse: the diagram is omitted here.

Suppose that a data warehouse consists of the four dimensions date, spectator, location, and game, and the two measures count and charge, where charge is the fare that a spectator pays when watching a game on a given date.

Spectators may be students, adults, or seniors, with each category having its own charge rate. Taking this cube as an example, briefly discuss the advantages and problems of using a bitmap index structure. Answer: (a) Draw a star schema diagram for the data warehouse: the diagram is omitted here. Bitmap indexing is advantageous for low-cardinality domains. For example, in this cube, if the dimension location is bitmap indexed, then comparison, join, and aggregation operations over location are reduced to bit arithmetic, which substantially reduces the processing time. For dimensions with high cardinality, such as date in this example, the vector used to represent the bitmap index could be very long: a multiyear collection of daily data results in thousands of date records, meaning that every tuple in the fact table would require a bitmap vector thousands of bits (hundreds of bytes) long to hold the bitmap index for date.
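A toy sketch of the bit-arithmetic advantage for a low-cardinality dimension such as location (Python integers are used here as arbitrary-length bit vectors):

```python
# One bit vector per distinct value of the low-cardinality dimension.
locations = ["NY", "LA", "NY", "SF", "LA", "NY"]      # illustrative fact-table column

bitmaps = {}
for row, loc in enumerate(locations):
    bitmaps[loc] = bitmaps.get(loc, 0) | (1 << row)   # set the bit for this row

# Selection, join, and aggregation over location reduce to bit arithmetic:
ny_or_la = bitmaps["NY"] | bitmaps["LA"]              # rows whose location is NY or LA
print(bin(ny_or_la))                                  # 0b110111
print(bin(bitmaps["NY"]).count("1"))                  # COUNT(*) where location = 'NY' -> 3
```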

Briefly describe the similarities and differences of the star schema and the snowflake schema, and then analyze their advantages and disadvantages with regard to one another. Give your opinion of which might be more empirically useful, and state the reasons behind your answer. Answer: They are similar in the sense that they both have a fact table as well as some dimension tables. The major difference is that some dimension tables in the snowflake schema are normalized, thereby further splitting the data into additional tables.

The advantage of the star schema is its simplicity, which enables efficiency, but it requires more space. The snowflake schema reduces some redundancy by sharing common tables: the tables are easy to maintain and save some space. However, it is less efficient, and the space saved is negligible in comparison with the typical magnitude of the fact table. Therefore, empirically, the star schema is better, simply because efficiency typically has higher priority than space as long as the space requirement is not too large.

Another option is to use a snowflake schema to maintain dimensions, and then present users with the same data collapsed into a star [2]. References for the answer to this question include: [1] Oracle Tip: Understand the difference between star and snowflake schemas in OLAP; [2] Snowflake Schemas.

Design a data warehouse for a regional weather bureau.

The weather bureau has about 1,000 probes, which are scattered throughout various land and ocean locations in the region to collect basic weather data, including air pressure, temperature, and precipitation, at each hour. All data are sent to the central station, which has collected such data for over 10 years. Your design should facilitate efficient querying and on-line analytical processing, and derive general weather patterns in multidimensional space. Answer: Since the weather bureau has about 1,000 probes scattered throughout various land and ocean locations, we need to construct a spatial data warehouse so that a user can view weather patterns on a map by month, by region, and by different combinations of temperature and precipitation, and can dynamically drill down or roll up along any dimension to explore desired patterns.

The star schema of this weather spatial data warehouse can be constructed accordingly (the diagram is omitted here). To construct this spatial data warehouse, we may need to integrate spatial data from heterogeneous sources and systems. Fast and flexible on-line analytical processing in spatial data warehouses is an important factor. There are three types of dimensions in a spatial data cube: nonspatial dimensions, spatial-to-nonspatial dimensions, and spatial-to-spatial dimensions. We distinguish two types of measures in a spatial data cube: numerical measures and spatial measures. A nonspatial data cube contains only nonspatial dimensions and numerical measures.

If a spatial data cube contains spatial dimensions but no spatial measures, then its OLAP operations, such as drilling or pivoting, can be implemented in a manner similar to that of nonspatial data cubes. If a user needs to use spatial measures in a spatial data cube, we can selectively precompute some of those spatial measures. Which portion of the cube should be selected for materialization depends on the utility (such as access frequency or access priority), the sharability of merged regions, and the balanced overall cost of space and on-line computation.

A popular data warehouse implementation is to construct a multidimensional database, known as a data cube. Unfortunately, this may often generate a huge, yet very sparse, multidimensional matrix. Present an example illustrating such a huge and sparse data cube. Answer: Consider, for example, a telephone company's billing database that keeps detailed call records for every customer. For the telephone company, it would be very expensive to keep detailed call records for every customer for longer than three months.

Therefore, it would be beneficial to remove that information from the database, keeping only the total number of calls made, the total minutes billed, and the amount billed, for example. The resulting computed data cube for the billing database would have large amounts of missing or removed data, resulting in a huge and sparse data cube.

Regarding the computation of measures in a data cube: (a) enumerate three categories of measures, based on the kind of aggregate functions used in computing the data cube; (b) for a measure such as variance, describe how to compute it if the cube is partitioned into many chunks. Answer:

(a) The three categories of measures are distributive, algebraic, and holistic. (b) The variance function is algebraic. If the cube is partitioned into many chunks, the variance can be computed as follows: read in the chunks one by one, keeping track of the accumulated (1) number of tuples N, (2) sum of the squared values Σ x_i², and (3) sum of the values Σ x_i.

After all chunks have been read, use the identity variance = (Σ x_i²)/N − ((Σ x_i)/N)² to obtain the variance. To compute a top-10 sales measure, for each cuboid use 10 units to register the top 10 sales found so far, and read the data in each cuboid once: if the sales amount in a tuple is greater than an existing one in the top-10 list, insert the new sales amount into the list and discard the smallest one in the list.

The computation of a higher-level cuboid can be performed similarly by propagating the top cells of its corresponding lower-level cuboids.
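A sketch of the per-cuboid top-10 registration described above, using a bounded min-heap so that each tuple is examined once and only ten values are ever stored:

```python
import heapq

def top_k_sales(amounts, k=10):
    """Single scan that keeps the k largest sales amounts seen so far."""
    heap = []                                # the smallest of the current top-k sits at heap[0]
    for amount in amounts:
        if len(heap) < k:
            heapq.heappush(heap, amount)
        elif amount > heap[0]:
            heapq.heapreplace(heap, amount)  # discard the smallest, insert the new amount
    return sorted(heap, reverse=True)

# A higher-level cuboid can be computed by feeding the top-k lists of its child cuboids
# through the same routine, since its top sales must appear in some child's top-k list.
print(top_k_sales([5, 80, 23, 99, 41, 7, 64, 12, 88, 3, 55, 71, 19], k=10))
```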

Due to the ever-increasing complexity and size of data sets and the wide range of applications in computer science, business, and health care, the process of discovering knowledge in data is more relevant than ever before. This book provides the tools needed to thrive in today's big data world. The author demonstrates how to leverage a company's existing databases to increase profits and market share, and carefully explains the most current data science methods and techniques.

The reader will learn data mining by doing data mining. By adding chapters on data modeling preparation, imputation of missing data, and multivariate statistical analysis, Discovering Knowledge in Data, Second Edition remains the eminent reference on data mining.




