Syntax

In linguistics, syntax (from Ancient Greek σύνταξις "coordination" from σύν syn, "together," and τάξις táxis, "an ordering") is "the study of the principles and processes by which sentences are constructed in particular languages."

In addition to referring to the discipline, the term syntax is also used to refer directly to the rules and principles that govern the sentence structure of any individual language. Modern research in syntax attempts to describe languages in terms of such rules. Many professionals in this discipline attempt to find general rules that apply to all natural languages.

Early history

Works on grammar were written long before modern syntax came about; the Aṣṭādhyāyī of Pāṇini (c. 4th century BC) is often cited as an example of a premodern work that approaches the sophistication of a modern syntactic theory.[2] In the West, the school of thought that came to be known as "traditional grammar" began with the work of Dionysius Thrax.

For centuries, work in syntax was dominated by a framework known as grammaire générale, first expounded in 1660 by Antoine Arnauld in a book of the same title. This system took as its basic premise the assumption that language is a direct reflection of thought processes and therefore there is a single, most natural way to express a thought.

However, in the 19th century, with the development of historical-comparative linguistics, linguists began to realize the sheer diversity of human language and to question fundamental assumptions about the relationship between language and logic. It became apparent that there was no such thing as the most natural way to express a thought, and therefore logic could no longer be relied upon as a basis for studying the structure of language.

The Port-Royal grammar modeled the study of syntax upon that of logic. (Indeed, large parts of the Port-Royal Logic were copied or adapted from the Grammaire générale.) Syntactic categories were identified with logical ones, and all sentences were analyzed in terms of "Subject – Copula – Predicate." Initially, this view was adopted even by the early comparative linguists such as Franz Bopp.
The central role of syntax within theoretical linguistics became clear only in the 20th century, which could reasonably be called the "century of syntactic theory" as far as linguistics is concerned. For a detailed and critical survey of the history of syntax over the last two centuries, see the monumental work by Giorgio Graffi (2001).

Character Set

A defined list of characters recognized by computer hardware and software, in which each character is represented by a number. The ASCII character set, for example, uses the numbers 0 through 127 to represent all English characters as well as special control characters. European ISO character sets are similar to ASCII but contain additional characters for European languages.


Designing Programming Languages for Reliability

This paper contains several comments and thoughts on designing programming languages so that programs tend to be more reliable. It is organized as a list of suggestions for anyone who is about to design a new language.

Introduction
The Ariane 5 was a launch vehicle designed and manufactured in Europe. On its maiden flight, the vehicle was destroyed during the launch phase due to a software error: an unhandled floating-point overflow. The software was written in Ada, the language that, several years earlier, the U.S. Defense Department had decreed was to be used for all military systems.

Software errors have killed innocent people. In an early infamous example, a computer controlled the X-ray dosage in a medical machine used to treat cancer patients. A programming error caused the machine to give lethal doses during radiation treatment, instead of the much smaller doses actually prescribed. Unfortunately, the lethality of the doses was not immediately detected and several patients were affected.
On a smaller scale, home computers are crashing all the time. Perhaps you too have been annoyed by an unexpected, unexplained failure of software that you felt ought to function correctly.

Software reliability is clearly important and much research has been done on how to increase reliability. Nevertheless, software quality remains lower than most people find acceptable. What is the problem?
First, we must realize that reliability is costly and not every program requires the same level of reliability. The problem isn't that we can't produce programs with the desired level of reliability but that we don't put an appropriate emphasis on it. This is fundamentally an economic issue, not a technical issue.
Second, most programmers enjoy programming and the goal of making programming more enjoyable is often directly opposed to the approaches for making programs more reliable.
Third, some of the theoretical work on program correctness is too abstract. It may be too difficult for many programmers to understand and too costly for organizations to use. It may be overkill. For many applications, we want a small, incremental increase in reliability for a small, incremental increase in programmer effort.

My feeling is that the programming language itself has a huge impact on reliability. This document contains a number of thoughts on how we can design languages to encourage correct programming.
I'll show several examples in C and C++. I don't mean to pick on C and C++ in particular. After all, the designers of C and C++ stated clearly that tradeoffs were always made in favor of efficiency, and that the languages were inherently dangerous and difficult, meant only for experienced programmers who were warned "buyer beware." I use C and C++ because the problem of bugs is worse in those languages than in other, more modern languages, and because they are so widely known and used today.
