
Export
  • 1
    Call number: B060:170
    Keywords: Data mining ; Artificial intelligence
    Description / Table of Contents: Part I. Machine Learning Tools and Techniques: 1. What's it all about?; 2. Input: concepts, instances, and attributes; 3. Output: knowledge representation; 4. Algorithms: the basic methods; 5. Credibility: evaluating what's been learned -- Part II. Advanced Data Mining: 6. Implementations: real machine learning schemes; 7. Data transformation; 8. Ensemble learning; 9. Moving on: applications and beyond -- Part III. The Weka Data Mining Workbench: 10. Introduction to Weka; 11. The explorer; 12. The knowledge flow interface; 13. The experimenter; 14. The command-line interface; 15. Embedded machine learning; 16. Writing new learning schemes; 17. Tutorial exercises for the Weka Explorer
    Abstract: Provides information on the tools and techniques of data mining, covering such topics as data transformation, ensemble learning, and datasets, and presents instructions on the Weka machine learning software
    Pages: xxxiii, 629 p. : ill.
    Edition: 3rd ed.
    ISBN: 9780123748560
    Call number Availability
    B060:170 departmental collection or stack – please contact the library
  • 2
    Call number: QA76.7:124(4)
    Keywords: Artificial intelligence ; Data mining
    Pages: xxxii, 621 p. : ill.
    Edition: 4th ed.
    ISBN: 9780128042915
    Call number Availability
    QA76.7:124(4) on loan
  • 3
    Call number: H0900:50 ; H0900:63 ; H0900:64 ; 09-EDV:1495
    Keywords: Data mining ; Java (Computer program language)
    Pages: xxv, 369 p. : ill.
    ISBN: 1558605525
    Call number Availability
    H0900:50 departmental collection or stack – please contact the library
    H0900:63 departmental collection or stack – please contact the library
    H0900:64 departmental collection or stack – please contact the library
    09-EDV:1495 departmental collection or stack – please contact the library
  • 4
    ISSN: 0885-6125
    Keywords: naive Bayes ; regression ; model trees ; linear regression ; locally weighted regression
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract Despite its simplicity, the naive Bayes learning scheme performs well on most classification tasks, and is often significantly more accurate than more sophisticated methods. Although the probability estimates that it produces can be inaccurate, it often assigns maximum probability to the correct class. This suggests that its good performance might be restricted to situations where the output is categorical. It is therefore interesting to see how it performs in domains where the predicted value is numeric, because in this case, predictions are more sensitive to inaccurate probability estimates. This paper shows how to apply the naive Bayes methodology to numeric prediction (i.e., regression) tasks by modeling the probability distribution of the target value with kernel density estimators, and compares it to linear regression, locally weighted linear regression, and a method that produces “model trees”—decision trees with linear regression functions at the leaves. Although we exhibit an artificial dataset for which naive Bayes is the method of choice, on real-world datasets it is almost uniformly worse than locally weighted linear regression and model trees. The comparison with linear regression depends on the error measure: for one measure naive Bayes performs similarly, while for another it is worse. We also show that standard naive Bayes applied to regression problems by discretizing the target value performs similarly badly. We then present empirical evidence that isolates naive Bayes' independence assumption as the culprit for its poor performance in the regression setting. These results indicate that the simplistic statistical assumption that naive Bayes makes is indeed more restrictive for regression than for classification.
    Type of Medium: Electronic Resource
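The abstract above describes applying the naive Bayes methodology to numeric prediction by modeling the probability distribution of the target value with kernel density estimators. A minimal sketch of that idea follows — not the paper's implementation; the Gaussian kernel, fixed bandwidth `h`, grid of candidate targets, and posterior-mean prediction are all illustrative choices:

```python
import numpy as np

def gauss(u, h):
    """Gaussian kernel with bandwidth h."""
    return np.exp(-0.5 * (u / h) ** 2) / (h * np.sqrt(2 * np.pi))

def nb_kde_regress(X, y, x_new, h=0.1, grid_size=100):
    """Naive-Bayes-style regression: p(t | x) is proportional to
    p(t) * prod_j p(x_j | t), with every density estimated by
    Gaussian kernels over the training data (a rough sketch)."""
    grid = np.linspace(y.min(), y.max(), grid_size)  # candidate target values
    log_post = np.zeros(grid_size)
    for k, t in enumerate(grid):
        w = gauss(y - t, h)                  # kernel weights of points near t
        prior = w.mean()                     # KDE estimate of p(t)
        log_post[k] = np.log(prior + 1e-12)
        for j in range(X.shape[1]):
            # conditional density p(x_j | t): kernel-weighted KDE over x_j,
            # treating features as independent given t (the naive assumption)
            cond = np.sum(w * gauss(X[:, j] - x_new[j], h)) / (w.sum() + 1e-12)
            log_post[k] += np.log(cond + 1e-12)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    return float(np.sum(grid * post))        # posterior-mean prediction
```

Note the feature-independence assumption in the inner loop: per the abstract, it is exactly this assumption that the paper isolates as the cause of naive Bayes' poor regression performance relative to model trees and locally weighted linear regression.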
  • 5
    ISSN: 0885-6125
    Keywords: Model trees ; classification algorithms ; M5 ; C5.0 ; decision trees
    Source: Springer Online Journal Archives 1860-2000
    Topics: Computer Science
    Notes: Abstract Model trees, which are a type of decision tree with linear regression functions at the leaves, form the basis of a recent successful technique for predicting continuous numeric values. They can be applied to classification problems by employing a standard method of transforming a classification problem into a problem of function approximation. Surprisingly, using this simple transformation the model tree inducer M5′, based on Quinlan's M5, generates more accurate classifiers than the state-of-the-art decision tree learner C5.0, particularly when most of the attributes are numeric.
    Type of Medium: Electronic Resource
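The abstract above relies on a standard transformation from classification to function approximation: one 0/1 membership indicator per class, one regression model per indicator, and prediction by argmax. A minimal sketch of that transformation — with a plain least-squares regressor standing in for the paper's M5′ model-tree inducer, purely for illustration:

```python
import numpy as np

class LinReg:
    """Trivial least-squares regressor with an intercept (stand-in for M5')."""
    def fit(self, X, y):
        A = np.column_stack([X, np.ones(len(X))])
        self.w, *_ = np.linalg.lstsq(A, y, rcond=None)
        return self
    def predict(self, X):
        return np.column_stack([X, np.ones(len(X))]) @ self.w

class ClassifierViaRegression:
    """Classification via function approximation: fit one regressor per
    class on a 0/1 indicator target, predict the class whose regressor
    outputs the highest score (a sketch of the transformation, not M5')."""
    def __init__(self, make_regressor):
        self.make_regressor = make_regressor

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.models_ = []
        for c in self.classes_:
            target = (y == c).astype(float)   # 0/1 membership indicator
            self.models_.append(self.make_regressor().fit(X, target))
        return self

    def predict(self, X):
        scores = np.column_stack([m.predict(X) for m in self.models_])
        return self.classes_[np.argmax(scores, axis=1)]
```

Swapping `LinReg` for a model-tree inducer recovers the setup the abstract compares against C5.0; the wrapper itself is agnostic to which regressor is plugged in.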