Corpus linguistics
From Wikipedia, the free encyclopedia
Corpus linguistics is the study of language as expressed in samples (corpora) of "real world" text. The corpus-based method derives a set of abstract rules by which a natural language is governed, or by which it relates to another language, from the texts themselves. Originally compiled by hand, corpora are now largely derived by an automated process.
Corpus linguistics adherents believe that reliable language analysis best occurs on field-collected samples, in natural contexts and with minimal experimental interference. Within corpus linguistics there are divergent views as to the value of corpus annotation, from John Sinclair[1] advocating minimal annotation and allowing texts to 'speak for themselves', to others, such as the Survey of English Usage team (based at University College London)[2] advocating annotation as a path to greater linguistic understanding and rigour.
History
Some of the earliest efforts at grammatical description were based at least in part on corpora of particular religious or cultural significance. For example, Prātiśākhya literature described the sound patterns of Sanskrit as found in the Vedas, and Pāṇini's grammar of classical Sanskrit was based at least in part on analysis of that same corpus. Similarly, the early Arabic grammarians paid particular attention to the language of the Quran. In the Western European tradition, scholars prepared concordances to allow detailed study of the language of the Bible and other canonical texts.
A landmark in modern corpus linguistics was the publication by Henry Kucera and W. Nelson Francis of Computational Analysis of Present-Day American English in 1967, a work based on the analysis of the Brown Corpus, a carefully compiled selection of current American English, totalling about a million words drawn from a wide variety of sources. Kucera and Francis subjected it to a variety of computational analyses, from which they compiled a rich and variegated opus, combining elements of linguistics, language teaching, psychology, statistics, and sociology. A further key publication was Randolph Quirk's 'Towards a description of English Usage' (1960)[3] in which he introduced The Survey of English Usage.
Shortly thereafter, Boston publisher Houghton Mifflin approached Kucera to supply a million-word, three-line citation base for its new American Heritage Dictionary, the first dictionary to be compiled using corpus linguistics. The AHD took the innovative step of combining prescriptive elements (how language should be used) with descriptive information (how it actually is used).
Other publishers followed suit. The British publisher Collins' COBUILD monolingual learner's dictionary, designed for users learning English as a foreign language, was compiled using the Bank of English. The Survey of English Usage Corpus was used in the development of one of the most important corpus-based grammars, A Comprehensive Grammar of the English Language (Quirk et al. 1985).[4]
The Brown Corpus has also spawned a number of similarly structured corpora: the LOB Corpus (1960s British English), Kolhapur (Indian English), Wellington (New Zealand English), Australian Corpus of English (Australian English), the Frown Corpus (early 1990s American English), and the FLOB Corpus (1990s British English). Other corpora represent many languages, varieties and modes, and include the International Corpus of English, and the British National Corpus, a 100 million word collection of a range of spoken and written texts, created in the 1990s by a consortium of publishers, universities (Oxford and Lancaster) and the British Library. For contemporary American English, work has stalled on the American National Corpus, but the 400+ million word Corpus of Contemporary American English (1990–present) is now available through a web interface.

The first computerized corpus of transcribed spoken language was constructed in 1971 by the Montreal French Project,[5] containing one million words, which inspired Shana Poplack's much larger corpus of spoken French in the Ottawa-Hull area.[6]
Besides these corpora of living languages, computerized corpora have also been made of collections of texts in ancient languages. An example is the Andersen-Forbes database of the Hebrew Bible, developed since the 1970s, in which every clause is parsed using graphs representing up to seven levels of syntax, and every segment tagged with seven fields of information.[7][8] The Quranic Arabic Corpus is an annotated corpus for the Classical Arabic language of the Quran. This is a recent project with multiple layers of annotation including morphological segmentation, part-of-speech tagging, and syntactic analysis using dependency grammar.[9]
Methods
Corpus linguistics has generated a number of research methods, which attempt to trace a path from data to theory. Wallis and Nelson (2001)[10] first introduced what they called the 3A perspective: Annotation, Abstraction and Analysis.
Annotation consists of the application of a scheme to texts. Annotations may include structural markup, part-of-speech tagging, parsing, and numerous other representations.
Abstraction consists of the translation (mapping) of terms in the scheme to terms in a theoretically motivated model or dataset. Abstraction typically includes linguist-directed search but may also include, for example, rule-learning for parsers.
Analysis consists of statistically probing, manipulating and generalising from the dataset. Analysis might include statistical evaluations, optimisation of rule-bases or knowledge discovery methods.
Most lexical corpora today are part-of-speech-tagged (POS-tagged). However even corpus linguists who work with 'unannotated plain text' inevitably apply some method to isolate salient terms. In such situations annotation and abstraction are combined in a lexical search.
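As a rough illustration of how the three stages interlock in such a lexical search, the sketch below runs a small 3A-style pipeline over NLTK's distribution of the Brown Corpus: the corpus's existing part-of-speech annotation is read (annotation), tokens are mapped onto a linguistically motivated query for modal verbs (abstraction), and a frequency distribution generalises over the result (analysis). The use of NLTK, the download step, and the choice of the Brown tagset's "MD" modal tag are assumptions of this example, not part of Wallis and Nelson's account.

```python
# A minimal 3A-style sketch, assuming the NLTK library (pip install nltk).
import nltk
from nltk.corpus import brown

nltk.download("brown", quiet=True)  # fetch the tagged corpus if absent

# Annotation: the Brown Corpus is distributed with part-of-speech tags,
# so each token arrives as a (word, tag) pair.
tagged = brown.tagged_words(categories="news")

# Abstraction: map the tagging scheme onto a theoretically motivated
# query -- here, every token the Brown tagset marks as a modal ("MD").
modals = [word.lower() for word, tag in tagged if tag == "MD"]

# Analysis: generalise over the dataset with a frequency distribution.
freq = nltk.FreqDist(modals)
print(freq.most_common(5))  # e.g. [('would', ...), ('will', ...), ...]
```

The same division of labour holds at larger scales: richer annotation schemes (parse trees, dependency graphs) simply widen the set of abstractions a researcher can query before the statistical analysis step.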
The advantage of publishing an annotated corpus is that other users can then perform experiments on the corpus (through corpus managers). Linguists with other interests and differing perspectives than the originators' can exploit this work. By sharing data, corpus linguists are able to treat the corpus as a locus of linguistic debate, rather than as an exhaustive fount of knowledge.
Recent studies have suggested that treatment outcomes in adolescents with social anxiety disorder can also be assessed by analysing language by means of corpus linguistics.[11]