Results for *

6 results were found.

Showing results 1 to 6 of 6.

  1. Web Corpus Construction
    Published: [2013]; © 2013
    Publisher: Morgan & Claypool Publishers, [San Rafael]

    Universität Potsdam, Universitätsbibliothek
    unrestricted interlibrary loan, copy and loan

    The World Wide Web constitutes the largest existing source of texts written in a great variety of languages. A feasible and sound way of exploiting this data for linguistic research is to compile a static corpus for a given language. There are several advantages to this approach: (i) Working with such corpora obviates the problems encountered when using Internet search engines in quantitative linguistic research (such as non-transparent ranking algorithms). (ii) Creating a corpus from web data is virtually free. (iii) The size of corpora compiled from the WWW may exceed by several orders of magnitude the size of language resources offered elsewhere. (iv) The data is locally available to users, who can post-process and query it linguistically with their preferred tools. This book addresses the main practical tasks in the creation of web corpora up to giga-token size. Among these tasks are the sampling process (i.e., web crawling) and the usual cleanups, including boilerplate removal and removal of duplicated content. Linguistic processing, and the problems that the various kinds of noise in web corpora pose for it, is also covered. Finally, the authors show how web corpora can be evaluated and compared to other corpora (such as traditionally compiled corpora).

    Contents:
    1. Web corpora
    2. Data collection -- 2.1 Introduction -- 2.2 The structure of the web -- 2.2.1 General properties -- 2.2.2 Accessibility and stability of web pages -- 2.2.3 What's in a (national) top-level domain? -- 2.2.4 Problematic segments of the web -- 2.3 Crawling basics -- 2.3.1 Introduction -- 2.3.2 Corpus construction from search engine results -- 2.3.3 Crawlers and crawler performance -- 2.3.4 Configuration details and politeness -- 2.3.5 Seed URL generation -- 2.4 More on crawling strategies -- 2.4.1 Introduction -- 2.4.2 Biases and PageRank -- 2.4.3 Focused crawling
    3. Post-processing -- 3.1 Introduction -- 3.2 Basic cleanups -- 3.2.1 HTML stripping -- 3.2.2 Character references and entities -- 3.2.3 Character sets and conversion -- 3.2.4 Further normalization -- 3.3 Boilerplate removal -- 3.3.1 Introduction to boilerplate -- 3.3.2 Feature extraction -- 3.3.3 Choice of the machine learning method -- 3.4 Language identification -- 3.5 Duplicate detection -- 3.5.1 Types of duplication -- 3.5.2 Perfect duplicates and hashing -- 3.5.3 Near duplicates, Jaccard coefficients, and shingling
    4. Linguistic processing -- 4.1 Introduction -- 4.2 Basics of tokenization, part-of-speech tagging, and lemmatization -- 4.2.1 Tokenization -- 4.2.2 Part-of-speech tagging -- 4.2.3 Lemmatization -- 4.3 Linguistic post-processing of noisy data -- 4.3.1 Introduction -- 4.3.2 Treatment of noisy data -- 4.4 Tokenizing web texts -- 4.4.1 Example: missing whitespace -- 4.4.2 Example: emoticons -- 4.5 POS tagging and lemmatization of web texts -- 4.5.1 Tracing back errors in POS tagging -- 4.6 Orthographic normalization -- 4.7 Software for linguistic post-processing
    5. Corpus evaluation and comparison -- 5.1 Introduction -- 5.2 Rough quality check -- 5.2.1 Word and sentence lengths -- 5.2.2 Duplication -- 5.3 Measuring corpus similarity -- 5.3.1 Inspecting frequency lists -- 5.3.2 Hypothesis testing with χ² -- 5.3.3 Hypothesis testing with Spearman's rank correlation -- 5.3.4 Using test statistics without hypothesis testing -- 5.4 Comparing keywords -- 5.4.1 Keyword extraction with χ² -- 5.4.2 Keyword extraction using the ratio of relative frequencies -- 5.4.3 Variants and refinements -- 5.5 Extrinsic evaluation -- 5.6 Corpus composition -- 5.6.1 Estimating corpus composition -- 5.6.2 Measuring corpus composition -- 5.6.3 Interpreting corpus composition -- 5.7 Summary
    Bibliography -- Authors' biographies
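
    The duplicate-detection step listed under 3.5.3 (shingling plus Jaccard coefficients) is easy to illustrate. The sketch below is not code from the book: it reduces each text to a set of word 3-grams ("shingles") and scores overlap as intersection size over union size; the class name, shingle size, and threshold are illustrative choices.

    ```java
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    public class ShingleJaccard {

        // Reduce a text to its set of k-word shingles (word n-grams).
        static Set<String> shingles(String text, int k) {
            String[] words = text.toLowerCase().split("\\s+");
            Set<String> result = new HashSet<>();
            for (int i = 0; i + k <= words.length; i++) {
                result.add(String.join(" ", Arrays.copyOfRange(words, i, i + k)));
            }
            return result;
        }

        // Jaccard coefficient: size of intersection divided by size of union.
        static double jaccard(Set<String> a, Set<String> b) {
            if (a.isEmpty() && b.isEmpty()) return 1.0;
            Set<String> intersection = new HashSet<>(a);
            intersection.retainAll(b);
            Set<String> union = new HashSet<>(a);
            union.addAll(b);
            return (double) intersection.size() / union.size();
        }

        public static void main(String[] args) {
            Set<String> d1 = shingles("the quick brown fox jumps over the lazy dog today", 3);
            Set<String> d2 = shingles("the quick brown fox jumped over the lazy dog today", 3);
            // Document pairs above a chosen threshold (say 0.8) would be flagged
            // as near duplicates; at web scale, shingle sets are compared via
            // hashing (e.g., MinHash) rather than pairwise.
            System.out.printf("Jaccard similarity: %.2f%n", jaccard(d1, d2));
        }
    }
    ```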

    Source: Union catalogues
    Contributors: Bildhauer, Felix (author)
    Language: English
    Media type: E-book
    Format: Online
    ISBN: 9781608459841
    RVK classification: ES 900
    Series: Synthesis Lectures on Human Language Technologies ; #22
    Subjects: Web search engines; Computational linguistics; Corpora (Linguistics)
    Extent: 1 online resource (222 pages), illustrations
    Note(s):

    Description based upon print version of record

    Also available in print.


  2. Web Corpus Construction
    Published: [2013]; © 2013
    Publisher: Morgan & Claypool Publishers, [San Rafael]

    The World Wide Web constitutes the largest existing source of texts written in a great variety of languages. A feasible and sound way of exploiting this data for linguistic research is to compile a static corpus for a given language.

    Universität Potsdam, Universitätsbibliothek
    no interlibrary loan

    Source: Union catalogues
    Contributors: Bildhauer, Felix (author)
    Language: English
    Media type: E-book
    Format: Online
    ISBN: 9781608459841
    RVK classification: ES 900
    Series: Synthesis Lectures on Human Language Technologies ; #22
    Subjects: Web search engines; Computational linguistics; Corpora (Linguistics)
    Extent: 1 online resource (222 pages), illustrations
    Note(s):

    Description based upon print version of record

    Also available in print.


  3. Lucene in action
    Published: c2010
    Publisher: Manning Pub., Stamford, Conn.

    "When Lucene first hit the scene five years ago, it was nothing short of amazing. By using this open-source, highly scalable, super-fast search engine, developers could integrate search into applications quickly and efficiently. A lot has changed... mehr

    Access:
    Sächsische Landesbibliothek - Staats- und Universitätsbibliothek Dresden
    no interlibrary loan
    Hochschule Furtwangen University. Informatik, Technik, Wirtschaft, Medien. Campus Furtwangen, Bibliothek
    eBook Safari
    no interlibrary loan
    Max-Planck-Institut für ethnologische Forschung, Bibliothek
    no interlibrary loan
    Zentrum für Wissensmanagement, Bibliothek Hamm
    eBook Safari
    no interlibrary loan
    Universitätsbibliothek Heidelberg
    no loan of volumes; only paper copies are sent
    Medizinische Fakultät Mannheim der Universität Heidelberg, Bibliothek
    no loan of volumes; only paper copies are sent
    Zentrum für Wissensmanagement, Bibliothek Lippstadt
    eBook Safari
    no interlibrary loan
    Otto-von-Guericke-Universität, Universitätsbibliothek
    eBook OReilly
    no interlibrary loan
    Duale Hochschule Baden-Württemberg Mannheim, Bibliothek
    O'Reilly
    no interlibrary loan
    Leibniz-Institut für Deutsche Sprache (IDS), Bibliothek
    no interlibrary loan
    Hochschule Offenburg, University of Applied Sciences, Bibliothek Campus Offenburg
    E-Book O'Reilly Online Learning
    no interlibrary loan
    Universitätsbibliothek Osnabrück
    no interlibrary loan
    Hochschule für Technik Stuttgart, Bibliothek
    oReilly eBook
    no interlibrary loan
    Universitätsbibliothek der Eberhard Karls Universität
    no interlibrary loan
    Technische Hochschule Ulm, Bibliothek
    eBook O'Reilly
    no interlibrary loan
    Universität Ulm, Kommunikations- und Informationszentrum, Bibliotheksservices
    no interlibrary loan

    "When Lucene first hit the scene five years ago, it was nothing short of amazing. By using this open-source, highly scalable, super-fast search engine, developers could integrate search into applications quickly and efficiently. A lot has changed since then-search has grown from a 'nice-to-have' feature into an indispensable part of most enterprise applications. Lucene now powers search in diverse companies including Akamai, Netflix, LinkedIn, Technorati, HotJobs, Epiphany, FedEx, Mayo Clinic, MIT, New Scientist Magazine, and many others. Some things remain the same, though. Lucene still delivers high-performance search features in a disarmingly easy-to-use API. Due to its vibrant and diverse open-source community of developers and users, Lucene is relentlessly improving, with evolutions to APIs, significant new features such as payloads, and a huge increase (as much as 8x) in indexing speed with Lucene 2.3. And with clear writing, reusable examples, and unmatched advice on best practices, Lucene in Action, Second Edition is still the definitive guide to developing with Lucene"--Resource description page.

    Source: Leibniz-Institut für Deutsche Sprache, Bibliothek
    Contributors: Hatcher, Erik (contributor); Gospodnetić, Otis (contributor)
    Language: English
    Media type: Book (monograph)
    Format: Online
    Edition: 2nd ed.
    Subjects: Lucene (Electronic resource); Web search engines; Internet searching; Java (Computer program language); Computer networks ; Handbooks, manuals, etc; Electronic books ; local
    Extent: 1 online resource (xxxviii, 448 p.), ill.
    Note(s):

    Description based on print version of record. - Previous ed.: Lucene in action / Otis Gospodnetić, Erik Hatcher. 2005. - Includes bibliographical references (p. 465-468) and index

  4. Learning Apache Solr High Performance
    Published: 2014
    Publisher: Packt Publishing, Birmingham

    Leibniz-Institut für Deutsche Sprache (IDS), Bibliothek
    no interlibrary loan

    Contents:
    Cover; Copyright; Credits; About the Author; About the Reviewers; www.PacktPub.com; Table of Contents; Preface
    Chapter 1: Installing Solr; Prerequisites; Installing components; Summary
    Chapter 2: Boost Your Search; Scoring; Boosting query-time and index-time; Index-time boosting; Query-time boosting; Troubleshoot queries and scores; The dismax query parser; Lucene DisjunctionMaxQuery; Autophrase boosting; Configuring autophrase boosting; Configuring the phrase slop; Boosting a partial phrase; Boost queries; Boost functions; Boost addition and multiplication; Function queries; Field references; Function references; Mathematical operations; The ord() and rord() functions; Other functions; Boosting the function query; Logarithm; Reciprocal; Linear; Inverse reciprocal; Summary
    Chapter 3: Performance Optimization; Solr performance factors; Solr caching; Document caching; Query result caching; Filter caching; Result pages caching; Using SolrCloud; Creating a SolrCloud cluster; Multiple collections within a cluster; Managing a SolrCloud cluster; Distributed indexing and searching; Stopping automatic document distribution; Near real-time search; Summary
    Chapter 4: Additional Performance Optimization Techniques; Documents similar to those returned in the search result; Sorting results by function values; Searching for homophones; Ignore the defined words from being searched; Summary
    Chapter 5: Troubleshooting; Dealing with the corrupt index; Reducing the file count in the index; Dealing with the locked index; Truncating the index size; Dealing with a huge count of open files; Dealing with out-of-memory issues; Dealing with an infinite loop exception in shards; Dealing with expensive garbage collection; Bulk updating a single field without full indexation; Summary
    Chapter 6: Performance Optimization with ZooKeeper; Getting familiar with ZooKeeper; Prerequisites for a distributed server; Aid your distributed system using ZooKeeper; Setting an ideal node count for ZooKeeper; Setting up, configuring, and deploying ZooKeeper; Setting up ZooKeeper; Configuring ZooKeeper; Deploying ZooKeeper; Applications of ZooKeeper; Summary
    Appendix; Index

    This book is an easy-to-follow guide, full of hands-on, real-world examples. Each topic is explained and demonstrated in a specific and user-friendly flow, from search optimization using Solr to the deployment of ZooKeeper applications. The book is ideal for Apache Solr developers who want to learn different techniques for optimizing Solr performance with utmost efficiency, and for effectively troubleshooting the problems that usually occur while trying to boost performance. Familiarity with search servers and database querying is expected.
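
    Chapter 2's query-time boosting is driven by the dismax/edismax parser's request parameters: qf weights the queried fields, pf and ps reward phrase matches within a slop window, and bf adds a boost function. A minimal sketch using Java's standard HTTP client; the core name, field names, and boost values are invented for illustration, while the parameter names themselves are standard Solr dismax/edismax parameters:

    ```java
    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class SolrBoostQuery {
        static String enc(String s) {
            return URLEncoder.encode(s, StandardCharsets.UTF_8);
        }

        public static void main(String[] args) throws Exception {
            // Hypothetical local core "products"; fields and boosts are examples.
            String url = "http://localhost:8983/solr/products/select"
                    + "?q=" + enc("solr performance")
                    + "&defType=edismax"
                    + "&qf=" + enc("title^5 description^1") // query-time field boosts
                    + "&pf=" + enc("title^10")              // phrase-match boost
                    + "&ps=2"                               // phrase slop
                    + "&bf=" + enc("log(popularity)")       // additive boost function
                    + "&wt=json";

            HttpResponse<String> response = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(URI.create(url)).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        }
    }
    ```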

    Source: Leibniz-Institut für Deutsche Sprache, Bibliothek
    Language: English
    Media type: E-book
    Format: Online
    ISBN: 9781782164838
    Edition: Online ed.
    Series: EBL-Schweitzer
    Subjects: Client/server computing; Data mining; Open source software; Search engines -- Programming; Web search engines
    Extent: 1 online resource (124 p.)
    Note(s):

    Description based upon print version of record

  5. Search user interfaces
    Published: 2009
    Publisher: Cambridge Univ. Press, Cambridge [et al.]

    Universitätsbibliothek Passau
    unrestricted interlibrary loan, copy and loan
    Source: Union catalogues
    Language: English
    Media type: E-book
    Format: Online
    ISBN: 9781139640817; 9780521113793
    RVK classification: ST 281 ; AN 95000 ; ES 900
    Edition: 1st publ.
    Subjects: Web search engines; User interfaces (Computer systems); Human-computer interaction; Information retrieval; Search engine; User orientation; Search; Human-machine interface; Information system; Research; Graphical user interface; User interface
    Extent: 1 online resource (XVIII, 385 p.), illustrations, diagrams
    Note(s):

    Bibliography: p. 329-364

  6. Search user interfaces
    Published: 2009
    Publisher: Cambridge Univ. Press, Cambridge [et al.]

    Staats- und Universitätsbibliothek Bremen
    no interlibrary loan
    Hochschulbibliothek Friedensau
    Online resource
    no interlibrary loan

    Focuses on the human users of search engines and the tools available for interaction and visualization in searches.

    Full text (free of charge)
    Full text (Connect to this resource online)
    Source: Union catalogues
    Language: English
    Media type: E-book
    Format: Online
    ISBN: 9780521113793
    RVK classification: ES 900
    Subjects: Web search engines; User interfaces (Computer systems); Human-computer interaction
    Extent: Online resource
    Note(s):

    Description based upon print version of record

    Contents:
    Cover; Title; Copyright; Dedication; Contents; Preface
    1 The Design of Search User Interfaces; 1.1 Keeping the Interface Simple; 1.2 A Historical Shift in Search Interface Design; 1.3 The Process of Search Interface Design; 1.4 Design Guidelines for Search Interfaces; 1.5 Offer Efficient and Informative Feedback; 1.6 Balance User Control with Automated Actions; 1.7 Reduce Short-Term Memory Load; 1.8 Provide Shortcuts; 1.9 Reduce Errors; 1.10 Recognize the Importance of Small Details; 1.11 Recognize the Importance of Aesthetics in Design; 1.12 Conclusions
    2 The Evaluation of Search User Interfaces; 2.1 Standard Information Retrieval Evaluation; 2.2 Informal Usability Testing; 2.3 Formal Studies and Controlled Experiments; 2.4 Longitudinal Studies; 2.5 Analyzing Search Engine Server Logs; 2.6 Large-Scale Log-Based Usability Testing (Bucket Testing); 2.7 Special Concerns with Evaluating Search Interfaces; 2.8 Conclusions
    3 Models of the Information Seeking Process; 3.1 The Standard Model of Information Seeking; 3.2 Cognitive Models of Information Seeking; 3.3 The Dynamic (Berry-Picking) Model; 3.4 Information Seeking in Stages; 3.5 Information Seeking as a Strategic Process; 3.6 Sensemaking: Search as Part of a Larger Process; 3.7 Information Needs and Query Intent; 3.8 Conclusions
    4 Query Specification; 4.1 Textual Query Specification; 4.2 Query Specification via Entry Form Interfaces; 4.3 Dynamic Term Suggestions During Query Specification; 4.4 Query Specification Using Boolean and Other Operators; 4.5 Query Specification Using Command Languages; 4.6 Conclusions
    5 Presentation of Search Results; 5.1 Document Surrogates; 5.2 KWIC, or Query-Oriented Summaries; 5.3 Highlighting Query Terms; 5.4 Additional Features of Results Listings; 5.5 The Effects of Search Results Ordering; 5.6 Visualization of Search Results; 5.7 Conclusions
    6 Query Reformulation; 6.1 The Need for Reformulation; 6.2 Spelling Suggestions and Corrections; 6.3 Automated Term Suggestions; 6.4 Suggesting Popular Destinations; 6.5 Relevance Feedback; 6.6 Showing Related Articles (More Like This); 6.7 Conclusions
    7 Supporting the Search Process; 7.1 Starting Points for Search; 7.2 Supporting Search History; 7.3 Supporting the Search Process as a Whole; 7.4 Integrating Search with Sensemaking; 7.5 Conclusions
    8 Integrating Navigation with Search; 8.1 Categories for Navigating and Narrowing; 8.2 Categories for Grouping Search Results; 8.3 Categories for Sorting and Filtering Search Results; 8.4 Organizing Search Results via Table-of-Contents Views; 8.5 The Decline of Hierarchical Navigation of Web Content; 8.6 Faceted Navigation; 8.7 Navigating via Social Tagging and Social Bookmarking; 8.8 Clustering in Search Interfaces; 8.9 Clusters vs. Categories in Search Interfaces; 8.10 Conclusions
    9 Personalization in Search; 9.1 Personalization Based on Explicit Preferences; 9.2 Personalization Based on Implicit Relevance Cues

    Electronic reproduction; Available via World Wide Web
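
    Section 5.3 of the contents, "Highlighting Query Terms", names one of the simplest results-presentation techniques the book discusses. A minimal sketch using only the Java standard library; real search UIs highlight analyzed tokens (stemmed, case-folded) rather than raw strings, so this is illustrative only:

    ```java
    import java.util.regex.Pattern;

    public class QueryHighlighter {
        // Wrap every whole-word, case-insensitive occurrence of each query
        // term in <b>...</b> tags.
        static String highlight(String snippet, String[] terms) {
            String result = snippet;
            for (String term : terms) {
                Pattern p = Pattern.compile("\\b" + Pattern.quote(term) + "\\b",
                        Pattern.CASE_INSENSITIVE);
                result = p.matcher(result).replaceAll(m -> "<b>" + m.group() + "</b>");
            }
            return result;
        }

        public static void main(String[] args) {
            String snippet = "Search user interfaces focus on the human users of search engines.";
            System.out.println(highlight(snippet, new String[] { "search", "users" }));
            // -> <b>Search</b> user interfaces focus on the human <b>users</b> of <b>search</b> engines.
        }
    }
    ```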