Saturday, May 23, 2020

The Economic System of South Africa - 1043 Words

What type of economic system does this country have? Explain some of the benefits of this system to the country and some of the drawbacks.

South Africa's economy is mainly based on free market principles. However, as in most developed economies, competition is controlled by government intervention. Therefore, South Africa has a mixed economy, in which a variety of private freedom is combined with centralized economic planning and government regulation.

ADVANTAGES:
• Various restrictions on businesses are made for the greater good. For instance, the Competition Act of 1998, in line with the best international practices, provides for various prohibitions on anti-competitive conduct and restrictive practices such as price fixing and even predatory pricing.
• Both private and public sectors work side by side, and the combined efforts lead to rapid economic development. The private sector's goal is to make profit, and for every dollar in revenue it earns, a tax goes directly to some level of government. The public sector can then afford to continue providing its services to the people. Additionally, citizens also pay taxes on goods and services from either the private or the public sector.
• International allegiances and international trade (imports and exports) expand opportunities for South African participation in world markets, while recognizing the role of foreign competition within South Africa.
• Mixed economies have a high…

Monday, May 18, 2020

Jean Piaget's Theory - 1673 Words

What is a theory? A theory is an organized set of ideas that is designed to explain development. Theories are essential for developing predictions about behaviors, and predictions result in research that helps to support or clarify the theory. The theorist I am choosing to talk about is Jean Piaget, who developed the cognitive development theory and broke it down into different stages: the sensorimotor stage, the preoperational stage, concrete operational thought, and formal operational thought. To sum up Piaget's theory, he believed children learn more about how the world works through little experiments in which they test their understanding. The stages he broke the theory down into describe how children understand their surroundings and become more advanced and accurate with age.

Who is Jean Piaget? Piaget had problems publishing some of his works because he was so young. Throughout his life, he had many offers and advanced quickly in everything that he did. In 1921, Piaget was invited by Claparede to become the director of research at the Jean-Jacques Rousseau Institute in Geneva (Presnell). Here, he would work in the field of child psychology and guide students. He planned to study the emergence of intelligence for the first two years and then return to the origins of mental health. The results of his work were published in the first five books on child psychology. It was during this time that he met Valentine…

Tuesday, May 12, 2020

How and Why Is a Social Group Represented in a Particular Way? Persepolis

Which social groups are marginalized, excluded or silenced in the text?

Outline:
* Show how Marjane Satrapi grew up under oppression during the Islamic Revolution in Iran.
* Give and explain evidence of how the author presents that different social groups were marginalized/silenced.
* Show how Marji and her parents shared the same beliefs when making reference to the regime.

The graphic novel Persepolis, by Marjane Satrapi, explores her childhood years in the middle of the Islamic Revolution. Situated in the commotion of the overthrowing of the Shah's regime, and the war with Iraq, the reader learns how secularists, nationalists and even Muslims marginalized, excluded and silenced the modernists in Iran during the Islamic Revolution. […] Therefore, the reader can intuit that the degree of marginalization and violence increased as social status decreased.

Besides focusing on Marji's own troubles of growing up during the Revolution, she also remarks on her parents' struggle with the ruling Islamic Party. She comes to realize that her parents' beliefs are opposite to those of the regime. While her parents drink alcohol, have parties and enjoy a wealthy lifestyle, the Guards of the Revolution control this behaviour. Marji's parents share her rebellious spirit: they also want to have secret parties, break the law and dress however they want to. In one frame Marji helps her mother to empty the alcohol down the toilet, since the police threaten to search their apartment (p.110). In another frame, Marji's mother puts tape on the windows as a safeguard against the Iraqi bombings, and black curtains to prevent the neighbours from seeing their parties (p.105). There is a parallelism at play between the upper classes of the revolution and the lower classes: although her parents revolt on a daily basis and share the same beliefs, upon returning home they can still try to enjoy secret pleasures in relative safety, whereas the lower classes are not afforded any means of escape.
Satrapi also criticizes Muslims for keeping the religious regime in power. She shows how self-mutilation was taken to extremes during the revolution by fundamentalists. In one scene, Marji stands up to her teacher and tells her to…

Wednesday, May 6, 2020

English Lit. Pretest Essay - 597 Words

1. Which of the following is a planning technique? (Points : 5)
Drafting
Revising
Proofreading
Clustering

2. Analytical reading will be hindered by _____. (Points : 5)
Annotating a text in the margins.
Previewing a text by skimming.
Reading only the abstract of a text.
Discussing a text with a classmate.

3. Which of the following is a revising technique? (Points : 5)
Editing for grammar
Correcting punctuation
Reordering paragraphs
Checking for spelling

4. In a _____ essay, all supporting details clearly relate to the thesis. (Points : 5)
Transitional
Unified
Stratified
Simplified

5. Which statement is… […] …Please bring: silverware, beverages, and a dish to share to the potluck.

10. Using words or ideas in a paper without properly crediting the source is ___. (Points : 5)
Plagiarizing
Cheating
Stealing
All of the above

11. Which of the following should be documented in a research essay? (Points : 5)
Paraphrase
Direct quote
Summary
All of the above

12. All of the following are acceptable sources for an academic research paper except for ______. (Points : 5)
Wikipedia
Peer-reviewed website
Personal interview
Accredited journal

13. Which of the following is the best strategy for linking evidence to your ideas in an argument? (Points : 5)
Presenting one side of an argument
Including information from a website
Including expert testimony
Including logical fallacies

14. When writing an argumentative essay, which of the following should be avoided? (Points : 5)
Presenting only one side of the argument
Presenting opposing viewpoints
Rebutting differing viewpoints
Pointing out common ground

15. In order to create credibility in analytical writing, which one of the following writing strategies might a writer use? (Points : 5)
Extended analogy
Distended definition
Hearsay evidence
Abstract detail

16.
Academic writing often involves crafting a ________, which can be…

Accomplishments of Ancient Rome

In the 3,000 years that make up the ancient history of the emergence of Western Civilization, Rome's contributions to society include the construction of bridges, domes, and temples. The Romans had great architectural skills that have stayed with us in one form or another for thousands of years. Each construction has evolved into many different forms that are found all over the world today. Each country or civilization uses these structures differently, but without the help of the Romans and Greeks, transportation might have been harder to accomplish and buildings would not have the beauty they have today.

Short bridges are not hard to build. They can easily be made by throwing a log across a narrow stream or river. It is building a bridge across a wide river that can be difficult: a straight bridge across a wide gap can be unsafe and insecure. The Romans built bridges in the form of an arch, which made a bridge better equipped to handle heavy weight without requiring many supports in the water itself. Like all inventions, the first few arch bridges had flaws, but these were worked out, and now many bridges across the world have an arch-like structure. The arch can also be found in historic buildings because of its beauty and uniqueness. The Romans were very proud of their accomplishment and used it whenever they could.

The Ancient Romans were the first to construct the dome. The Pantheon, an important building of Ancient Rome, contains a dome that is very noticeable from the exterior of the building. The dome of the Pantheon is one of the largest masonry domes ever built. A heavy concrete base supports the weight, while the upper walls and dome are constructed of a lighter mix of concrete. The center of the dome has an opening which allows light and rain to enter.
Many buildings and houses right here in New York have a roof with a dome shape; it is really popular in old Victorian homes.

The Ancient Romans were not the first to construct temples, but they contributed their own ideas to the structure. Some temples, such as the Temple of Saturn, have been rebuilt many times. Eight Ionic columns still remain on the Temple of Saturn today. Romans often didn't include fluting on the column shafts. Roman temples had columns and many pieces of artwork hung throughout the buildings showing Roman life, just like the temples of Greece.

Every building design starts off very basic and evolves into something great and powerful. Each architectural design was created for some reason or another. If the Romans had not created the arch, dome or temples, it doesn't mean they would never have been created; the Romans were just the first who needed or desired them. I'm sure someone, somewhere would have had the idea of creating such beautiful pieces of architecture. But as it stands today, we thank the Romans and Greeks for our earliest forms of advanced architecture.

Income Inequality in New Zealand

Inequality in New Zealand The purpose of this report is to examine inequality and inequity in New Zealand income between ethnicity, gender and education. It will look at the positive and negative effects in income inequality. Inequality is the unequal distribution of household or individual income across various participants in an economy and inequity is unfairness involving favourtism and bias. To conduct my investigation I looked at articles and websites which contained information which was recent and relevent to domestic New Zealand inequality. The Gini Coefficient, a standard measure of income inequality that ranges from zero (everyone has identical incomes) to 1 (all incomes goes to only one person) rose by 4% in New Zealland along with 16 of the 22 OECD countries from mid 1990 to the late 2000s from the average of 0. 29, from 0. 27 to 0. 34 for New Zealand. 1 This means that inequality has increased in the country moving the Lorenz curve for New Zealand outward into a greater curve. The curve shows that a greater percentage of wealth is owned by the top decile of the population, indicating that the rich are getting richer while the poor are getting poorer. Impacts of the recession in terms of job losses impacted disproportionately those with low income, which means Maori and Pacific people as they are disproportionately represented in those lower incomes. There was an increase in European income from $569 a week during the recession to $580 this year while Maori experienced a sharp drop in income, down $40 to $459 and Pacific people, down $65 to $390. Maori unemployment rose from 10. 2% in March 2008 to 14. 8% in March 2012, Pacific unemployment rose from 8. 7% to 14. 7% while European unemployment only rose by 3% to 4. %. A maturing Asian population caused a large increase in the median income for Asians from $344 a week to $405. 2 In 2006 the mean income for Maori was 73% of Non-Maori median income and 85. 
7% of the mean income of all residents, the Pacific median was only 84% of the total median income. 3 This shows that there is inequity in income based on ethnicity in New Zealand as the rises and drops of income is inconsistent th rough racial groups, with European and Asian income increasing while Maori and Pacific people income rates decreasing after the recession. In 2008 a quality of life survery said 11% European, 17% Maori and 23% of Pacific people said they did not have enough to cover everyday needs. 4 There is also evident income inequity due to gender. There is a definite income gap between males in females when comparing by profession. The New Zealand census of womens participation census found a gender pay gap in the public sector of 38. 81% in Defence, 29% in Treasury, 27. 2% in the office of the Prime Minister and Cabinet and 14. 9% in the Department of Labour. The gap also widens through time spent in employment. One year after entering employment the average 1 2 3 4 5 http://ips. ac. nz Income gap between races gets wider – NZHerald. co. nz http://www. teara. govt. nz http://www. teara. govt. nz http://union. org. nz By Alla Rull income gap between men and women with a bachelor qualification or above was around 6%, after 5 years the average gap had increased to 17%. 6 The curve for income growth also evens out much quicker for women compared to men. The income evens out for women at the average age of 39 while males surpass females by 9 years with an average wage higher by $35000. 7 Income inequality is also present North Shore between parts of the country. In population race distribution New Zealander Auckland, South Auckland had the European lowest median personal income in 2006 Maori of $24200 while the North Shore had Pacific the highest at $29100. West Auckland Asian had the second lowest at $26100 and Other Auckland City the second highest at $28000. 
South Auckland also has the highest percentage of Maori and Pacific South Auckland people compared to the other parts of population race distribution New Zealander Auckland with 15% maori and 28% European Pacifica. The North Shore had the Maori lowest percentage of Maori at 6% and Pacific 8 Pacific people at 3%. This reflects the Asian average income and the income data of Other Maori and Pacific islanders compared to Europeans and Asians. The inequality of income in the different parts of the region may also reflect the opportunities available in the part of auckland as opportunities are one of the leading causes of inequality. West Auckland and South Auckland were designed as residential areas and have less industrial opportunities compared with the city and North Shore. Early life opportunities also affect the inequality, schools in the lower income areas of Auckland have lower deciles and therefore have less money available to the school to offer students opportunities compared to higher decile schools. The most common income inequality is incomes earned by people with different qualifications. A typical bachelor degree recipient can expect to earn 73% more over a 40 year working life than the typical high school graduate and the average lifetime earnings for doctoral degree recipients are between 2 and a half and 3 times as high as the average lifetime earnings 6 http://union. org. nz 7 Payscale. com 8 http://www. enz. org By Alla Rull for highschool graduates. 9 The graph shows that at higher levels of qualifications the average yearly pay is increased with Lvl 10 Doctorate and Lvl 9 Masters learning about twice as much as a Lvl 1-3 Certificate and a Lvl 4 Cetrificate. 
This is due to the specialisation of the high level degrees and the amount of training and time put into the learning meaning fewer people complete them causing a smaller supply for careers needing those degrees compared to jobs which do not need any qualifications and therefore the wage rate would be at a high equilibrium point making it higher than the non qualification jobs. Income inequality may be seen as a negative in a economy as it creates a major gap between the upper and lower class which may be hard to cross. The New Zealand Living Standards 2004 report showed a million New Zealanders living in some degree of hardship, with a quarter of these in severe hardship. Despite the buoyant economy and falls in unemployment levels, not only was there a slight increase in the overall percentage of those living in poverty between 2000 and 2004, but those with the most restricted living standards had slipped deeper into poverty (poverty defined as exclusion from the minimum acceptable way of life in ones own society because of inadequate resources). 0 Income inequality is taking opportunities away from those less fortunate to be born into a wealthy family. It is seen as negative because it is caused by unfair bias based on race or sex which do not affect the persons ability to perform. However the increase in income inequality may also have positive effects. It acts as a motivator for the population to gain higher educations so they can earn high salaries, this may also make New Zealand workers more demanded over seas due to the high qualifications of the country, it promotes education as higher qualifications are needed to obtain the higher ncome careers. Greater inequality may also indicate that the economy is booming and higher skilled jobs are becoming more and more demanding within the country pushing up the wage rate for highly qualified jobs. 
Another big reason inequality can be seen as a positive is the inequality between job wages: not having the same pay for a highly skilled career such as a doctor and a low-skill job such as a janitor means the individual is paid a fair amount for the time spent training and the added qualifications which are needed for the job.
By Alla Rull

Sunday, May 3, 2020

Documentation on inventory system free essay sample

Software prototyping refers to the activity of creating prototypes of software applications, i.e., incomplete versions of the software program being developed. It is an activity that can occur in software development and is comparable to prototyping as known from other fields, such as mechanical engineering or manufacturing. A prototype typically simulates only a few aspects of, and may be completely different from, the final product. Prototyping has several benefits: the software designer and implementer can get valuable feedback from the users early in the project, and the client and the contractor can compare whether the software made matches the software specification, according to which the software program is built. It also allows the software engineer some insight into the accuracy of initial project estimates and whether the deadlines and milestones proposed can be successfully met. The degree of completeness and the techniques used in prototyping have been in development and debate since its proposal in the early 1970s. [6]

Overview

The original purpose of a prototype is to allow users of the software to evaluate developers' proposals for the design of the eventual product by actually trying them out, rather than having to interpret and evaluate the design based on descriptions. Prototyping can also be used by end users to describe and prove requirements that developers have not considered, and that can be a key factor in the commercial relationship between developers and their clients. [1] Interaction design in particular makes heavy use of prototyping with that goal. This process is in contrast with the 1960s and 1970s monolithic development cycle of building the entire program first and then working out any inconsistencies between design and implementation, which led to higher software costs and poor estimates of time and cost. The monolithic approach has been dubbed the "Slaying the (software) Dragon" technique, since it assumes that the software designer and developer is a single hero who has to slay the entire dragon alone. Prototyping can also avoid the great expense and difficulty of changing a finished software product. The practice of prototyping is one of the points Fred Brooks makes in his 1975 book The Mythical Man-Month and his 10-year anniversary article "No Silver Bullet". An early example of large-scale software prototyping was the implementation of NYU's Ada/ED translator for the Ada programming language. [2] It was implemented in SETL with the intent of producing an executable semantic model for the Ada language, emphasizing clarity of design and user interface over speed and efficiency. The NYU Ada/ED system was the first validated Ada implementation, certified on April 11, 1983. [3]

Outline of the prototyping process

The process of prototyping involves the following steps: 1. Identify basic requirements: determine basic requirements, including the input and output information desired.
Details, such as security, can typically be ignored. 2. Develop initial prototype: the initial prototype is developed, including only user interfaces (see Horizontal Prototype, below). 3. Review: the customers, including end-users, examine the prototype and provide feedback on additions or changes. 4. Revise and enhance the prototype: using the feedback, both the specifications and the prototype can be improved. Negotiation about what is within the scope of the contract/product may be necessary. If changes are introduced, then a repeat of steps 3 and 4 may be needed.

Dimensions of prototypes

Nielsen summarizes the various dimensions of prototypes in his book Usability Engineering.

Horizontal prototype

A common term for a user interface prototype is the horizontal prototype. It provides a broad view of an entire system or subsystem, focusing on user interaction more than low-level system functionality, such as database access. Horizontal prototypes are useful for: confirmation of user interface requirements and system scope; a demonstration version of the system to obtain buy-in from the business; and developing preliminary estimates of development time, cost and effort.

Vertical prototype

A vertical prototype is a more complete elaboration of a single subsystem or function. It is useful for obtaining detailed requirements for a given function, with the following benefits: refinement of database design; obtaining information on data volumes and system interface needs, for network sizing and performance engineering; and clarifying complex requirements by drilling down to actual system functionality.

Types of prototyping

Software prototyping has many variants. However, all the methods are in some way based on two major types of prototyping: throwaway prototyping and evolutionary prototyping.

Throwaway prototyping

Also called close-ended prototyping.
Throwaway or Rapid Prototyping refers to the creation of a model that will eventually be discarded rather than becoming part of the final delivered software. After preliminary requirements gathering is accomplished, a simple working model of the system is constructed to visually show the users what their requirements may look like when they are implemented into a finished system. Rapid Prototyping involves creating a working model of various parts of the system at a very early stage, after a relatively short investigation. The method used in building it is usually quite informal, the most important factor being the speed with which the model is provided. The model then becomes the starting point from which users can re-examine their expectations and clarify their requirements. When this has been achieved, the prototype model is thrown away, and the system is formally developed based on the identified requirements. [7] The most obvious reason for using Throwaway Prototyping is that it can be done quickly. If the users can get quick feedback on their requirements, they may be able to refine them early in the development of the software. Making changes early in the development lifecycle is extremely cost-effective, since there is nothing at that point to redo. If a project is changed after considerable work has been done, then small changes could require large efforts to implement, since software systems have many dependencies. Speed is crucial in implementing a throwaway prototype, since with a limited budget of time and money little can be expended on a prototype that will be discarded. Another strength of Throwaway Prototyping is its ability to construct interfaces that the users can test. The user interface is what the user sees as the system, and by seeing it in front of them, it is much easier to grasp how the system will work.
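One quick way to get such a testable interface is a set of canned screens with no real logic behind them. A minimal sketch, in which every screen name and data value is invented for illustration:

```python
# Sketch of a throwaway interface prototype: screens render canned data
# so users can react to the system's look before any real logic exists.
# All screen names and sample data here are hypothetical.
CANNED_STOCK = [("widget", 12), ("gadget", 5)]  # hard-coded, no database

def stock_screen():
    """Render the stock list the way the final screen might look."""
    lines = ["ITEM        QTY", "---------------"]
    for name, qty in CANNED_STOCK:
        lines.append(f"{name:<12}{qty}")
    return "\n".join(lines)

def reorder_screen(item):
    """A click-through stub: accepts input but performs no real action."""
    return f"Reorder for '{item}' recorded (prototype only, nothing happens)"

print(stock_screen())
print(reorder_screen("widget"))
```

Because nothing behind the screens is real, the whole thing can be discarded once users have confirmed the layout and workflow, which is exactly the point of the throwaway approach.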
…it is asserted that revolutionary rapid prototyping is a more effective manner in which to deal with user requirements-related issues, and therefore a greater enhancement to software productivity overall. Requirements can be identified, simulated, and tested far more quickly and cheaply when issues of evolvability, maintainability, and software structure are ignored. This, in turn, leads to the accurate specification of requirements, and the subsequent construction of a valid and usable system from the user's perspective via conventional software development models. [8] Prototypes can be classified according to the fidelity with which they resemble the actual product in terms of appearance, interaction and timing. One method of creating a low-fidelity Throwaway Prototype is paper prototyping. The prototype is implemented using paper and pencil, and thus mimics the function of the actual product, but does not look at all like it. Another method to easily build high-fidelity Throwaway Prototypes is to use a GUI builder and create a click dummy, a prototype that looks like the goal system but does not provide any functionality. Not exactly the same as Throwaway Prototyping, but certainly in the same family, is the usage of storyboards, animatics or drawings. These are non-functional implementations but show how the system will look. Summary: in this approach the prototype is constructed with the idea that it will be discarded and the final system will be built from scratch. The steps in this approach are: 1. Write preliminary requirements 2. Design the prototype 3. User experiences/uses the prototype, specifies new requirements 4. Repeat if necessary 5. Write the final requirements 6. Develop the real product

Evolutionary prototyping

Evolutionary Prototyping (also known as breadboard prototyping) is quite different from Throwaway Prototyping.
The main goal when using Evolutionary Prototyping is to build a very robust prototype in a structured manner and constantly refine it. The reason for this is that the Evolutionary prototype, when built, forms the heart of the new system, and the improvements and further requirements will then be built on it. When developing a system using Evolutionary Prototyping, the system is continually refined and rebuilt. "…evolutionary prototyping acknowledges that we do not understand all the requirements and builds only those that are well understood." [5] This technique allows the development team to add features, or make changes that couldn't be conceived during the requirements and design phase. For a system to be useful, it must evolve through use in its intended operational environment. "A product is never done; it is always maturing as the usage environment changes… we often try to define a system using our most familiar frame of reference: where we are now. We make assumptions about the way business will be conducted and the technology base on which the business will be implemented. A plan is enacted to develop the capability, and, sooner or later, something resembling the envisioned system is delivered." [9] Evolutionary Prototypes have an advantage over Throwaway Prototypes in that they are functional systems. Although they may not have all the features the users have planned, they may be used on an interim basis until the final system is delivered. "It is not unusual within a prototyping environment for the user to put an initial prototype to practical use while waiting for a more developed version… The user may decide that a flawed system is better than no system at all." [7] In Evolutionary Prototyping, developers can focus on developing the parts of the system that they understand instead of working on the whole system. To minimize risk, the developer does not implement poorly understood features. The partial system is sent to customer sites.
As users work with the system, they detect opportunities for new features and give requests for these features to developers. Developers then take these enhancement requests along with their own and use sound configuration-management practices to change the software-requirements specification, update the design, recode and retest. [10]

Incremental prototyping

The final product is built as separate prototypes. At the end, the separate prototypes are merged in an overall design.

Extreme prototyping

Extreme Prototyping as a development process is used especially for developing web applications. Basically, it breaks down web development into three phases, each one based on the preceding one. The first phase is a static prototype that consists mainly of HTML pages. In the second phase, the screens are programmed and fully functional using a simulated services layer. In the third phase the services are implemented. The process is called Extreme Prototyping to draw attention to the second phase of the process, where a fully functional UI is developed with very little regard to the services other than their contract.

Advantages of prototyping

There are many advantages to using prototyping in software development – some tangible, some abstract. [11] Reduced time and costs: prototyping can improve the quality of requirements and specifications provided to developers. Because changes cost exponentially more to implement as they are detected later in development, the early determination of what the user really wants can result in faster and less expensive software. [8] Improved and increased user involvement: prototyping requires user involvement and allows them to see and interact with a prototype, allowing them to provide better and more complete feedback and specifications. [7] The presence of the prototype being examined by the user prevents many misunderstandings and miscommunications that occur when each side believes the other understands what they said.
Since users know the problem domain better than anyone on the development team does, increased interaction can result in a final product that has greater tangible and intangible quality. The final product is more likely to satisfy the users' desire for look, feel and performance.

Disadvantages of prototyping

Using, or perhaps misusing, prototyping can also have disadvantages. Insufficient analysis: the focus on a limited prototype can distract developers from properly analyzing the complete project. This can lead to overlooking better solutions, preparation of incomplete specifications or the conversion of limited prototypes into poorly engineered final projects that are hard to maintain. Further, since a prototype is limited in functionality, it may not scale well if the prototype is used as the basis of a final deliverable, which may not be noticed if developers are too focused on building a prototype as a model. User confusion of prototype and finished system: users can begin to think that a prototype, intended to be thrown away, is actually a final system that merely needs to be finished or polished. (They are, for example, often unaware of the effort needed to add error-checking and security features which a prototype may not have.) This can lead them to expect the prototype to accurately model the performance of the final system when this is not the intent of the developers. Users can also become attached to features that were included in a prototype for consideration and then removed from the specification for a final system. If users are able to require all proposed features be included in the final system, this can lead to conflict. Developer misunderstanding of user objectives: developers may assume that users share their objectives (e.g. to deliver core functionality on time and within budget), without understanding wider commercial issues. For example, user representatives attending Enterprise software (e.g.
PeopleSoft) events may have seen demonstrations of transaction auditing (where changes are logged and displayed in a difference grid view) without being told that this feature demands additional coding and often requires more hardware to handle extra database accesses. Users might believe they can demand auditing on every field, whereas developers might think this is feature creep because they have made assumptions about the extent of user requirements. If the developer has committed to delivery before the user requirements were reviewed, developers are between a rock and a hard place, particularly if user management derives some advantage from their failure to implement requirements. Developer attachment to prototype: developers can also become attached to prototypes they have spent a great deal of effort producing; this can lead to problems like attempting to convert a limited prototype into a final system when it does not have an appropriate underlying architecture. (This may suggest that throwaway prototyping, rather than evolutionary prototyping, should be used.) Excessive development time of the prototype: a key property of prototyping is the fact that it is supposed to be done quickly. If the developers lose sight of this fact, they very well may try to develop a prototype that is too complex. When the prototype is thrown away, the precisely developed requirements that it provides may not yield a sufficient increase in productivity to make up for the time spent developing the prototype. Users can become stuck in debates over details of the prototype, holding up the development team and delaying the final product. Expense of implementing prototyping: the start-up costs for building a development team focused on prototyping may be high. Many companies have development methodologies in place, and changing them can mean retraining, retooling, or both.
Many companies tend to just jump into prototyping without bothering to retrain their workers as much as they should. A common problem with adopting prototyping technology is high expectations for productivity with insufficient effort behind the learning curve. In addition to training for the use of a prototyping technique, there is an often overlooked need for developing corporate and project-specific underlying structure to support the technology. When this underlying structure is omitted, lower productivity can often result. [13]

Best projects to use prototyping

It has been argued that prototyping, in some form or another, should be used all the time. However, prototyping is most beneficial in systems that will have many interactions with the users. It has been found that prototyping is very effective in the analysis and design of on-line systems, especially for transaction processing, where the use of screen dialogs is much more in evidence. The greater the interaction between the computer and the user, the greater the benefit that can be obtained from building a quick system and letting the user play with it. [7] Systems with little user interaction, such as batch processing or systems that mostly do calculations, benefit little from prototyping. Sometimes, the coding needed to perform the system functions may be too intensive and the potential gains that prototyping could provide are too small. [7] Prototyping is especially good for designing good human-computer interfaces. "One of the most productive uses of rapid prototyping to date has been as a tool for iterative user requirements engineering and human-computer interface design." [8]

Methods

There are few formal prototyping methodologies, even though most agile methods rely heavily upon prototyping techniques.
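All of these methods iterate some form of the identify/build/review/revise cycle outlined earlier. A minimal sketch of that loop, with scripted feedback standing in for real user review sessions (the requirements themselves are invented):

```python
# Minimal sketch of the generic prototyping cycle: build, review,
# revise, repeat. Feedback here is scripted; in practice it comes
# from user review sessions with each prototype.
def prototyping_cycle(requirements, review_sessions):
    requirements = list(requirements)               # step 1: basic requirements
    for feedback in review_sessions:
        prototype = f"prototype of {requirements}"  # step 2: build a mock-up
        requirements.extend(feedback)               # steps 3-4: review, revise
    return requirements                             # the refined specification

spec = prototyping_cycle(
    ["record sales"],
    review_sessions=[["search by date"], ["export to CSV"]],
)
print(spec)  # ['record sales', 'search by date', 'export to CSV']
```

Whether the prototype built inside the loop is then discarded (throwaway) or carried forward as the baseline (evolutionary) is exactly the distinction the methods below formalize.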
Dynamic systems development method

Dynamic Systems Development Method (DSDM) [18] is a framework for delivering business solutions that relies heavily upon prototyping as a core technique, and is itself ISO 9001 approved. It expands upon most understood definitions of a prototype. According to DSDM, the prototype may be a diagram, a business process, or even a system placed into production. DSDM prototypes are intended to be incremental, evolving from simple forms into more comprehensive ones. DSDM prototypes may be throwaway or evolutionary. Evolutionary prototypes may be evolved horizontally (breadth then depth) or vertically (each section is built in detail, with additional iterations detailing subsequent sections). Evolutionary prototypes can eventually evolve into final systems. The four categories of prototypes as recommended by DSDM are: Business prototypes – used to design and demonstrate the business processes being automated. Usability prototypes – used to define, refine, and demonstrate user interface design usability, accessibility, look and feel. Performance and capacity prototypes – used to define, demonstrate, and predict how systems will perform under peak loads, as well as to demonstrate and evaluate other non-functional aspects of the system (transaction rates, data storage volume, response time, etc.). Capability/technique prototypes – used to develop, demonstrate, and evaluate a design approach or concept. The DSDM lifecycle of a prototype is to: 1. Identify prototype 2. Agree to a plan 3. Create the prototype 4. Review the prototype

Operational prototyping

Operational Prototyping was proposed by Alan Davis as a way to integrate throwaway and evolutionary prototyping with conventional system development. It offers the best of both the quick-and-dirty and conventional-development worlds in a sensible manner.
Designers develop only well-understood features in building the evolutionary baseline, while using throwaway prototyping to experiment with the poorly understood features. [5] Davis's belief is that trying to retrofit quality onto a rapid prototype is not the correct approach when trying to combine the two approaches. His idea is to engage in an evolutionary prototyping methodology and rapidly prototype the features of the system after each evolution. The specific methodology follows these steps: [5] An evolutionary prototype is constructed and made into a baseline using conventional development strategies, specifying and implementing only the requirements that are well understood. Copies of the baseline are sent to multiple customer sites along with a trained prototyper. At each site, the prototyper watches the user at the system. Whenever the user encounters a problem or thinks of a new feature or requirement, the prototyper logs it. This frees the user from having to record the problem, and allows him to continue working. After the user session is over, the prototyper constructs a throwaway prototype on top of the baseline system. The user now uses the new system and evaluates it. If the new changes aren't effective, the prototyper removes them. If the user likes the changes, the prototyper writes feature-enhancement requests and forwards them to the development team. The development team, with the change requests in hand from all the sites, then produces a new evolutionary prototype using conventional methods. Obviously, a key to this method is to have well-trained prototypers available to go to the user sites. The Operational Prototyping methodology has many benefits in systems that are complex and have few known requirements in advance.

Evolutionary systems development

Evolutionary Systems Development is a class of methodologies that attempt to formally implement Evolutionary Prototyping.
One particular type, called Systemscraft, is described by John Crinnion in his book Evolutionary Systems Development. Systemscraft was designed as a prototype methodology that should be modified and adapted to fit the specific environment in which it was implemented, not as a rigid cookbook approach to the development process. "It is now generally recognised [sic] that a good methodology should be flexible enough to be adjustable to suit all kinds of environment and situation…" [7] The basis of Systemscraft, not unlike Evolutionary Prototyping, is to create a working system from the initial requirements and build upon it in a series of revisions. Systemscraft places heavy emphasis on traditional analysis being used throughout the development of the system.

Evolutionary rapid development

Evolutionary Rapid Development (ERD) [12] was developed by the Software Productivity Consortium, a technology development and integration agent for the Information Technology Office of the Defense Advanced Research Projects Agency (DARPA). Fundamental to ERD is the concept of composing software systems based on the reuse of components, the use of software templates, and an architectural template. Continuous evolution of system capabilities in rapid response to changing user needs and technology is highlighted by the evolvable architecture, representing a class of solutions. The process focuses on the use of small artisan-based teams integrating software and systems engineering disciplines, working in multiple, often parallel, short-duration timeboxes with frequent customer interaction. Key to the success of ERD-based projects is parallel exploratory analysis and development of features, infrastructures, and components, with adoption of leading-edge technologies enabling quick reaction to changes in technologies, the marketplace, or customer requirements.
[9] To elicit customer/user input, frequent scheduled and ad hoc/impromptu meetings with the stakeholders are held. Demonstrations of system capabilities are held to solicit feedback before design/implementation decisions are solidified. Frequent releases (e.g., betas) are made available for use to provide insight into how the system could better support user and customer needs. This assures that the system evolves to satisfy existing user needs. The design framework for the system is based on using existing published or de facto standards. The system is organized to allow for evolving a set of capabilities that includes considerations for performance, capacities, and functionality. The architecture is defined in terms of abstract interfaces that encapsulate the services and their implementation (e.g., COTS applications). The architecture serves as a template to be used for guiding development of more than a single instance of the system. It allows for multiple application components to be used to implement the services. A core set of functionality not likely to change is also identified and established. The ERD process is structured to use demonstrated functionality rather than paper products as a way for stakeholders to communicate their needs and expectations. Central to this goal of rapid delivery is the use of the timebox method. Timeboxes are fixed periods of time in which specific tasks (e.g., developing a set of functionality) must be performed. Rather than allowing time to expand to satisfy some vague set of goals, the time is fixed (both in terms of calendar weeks and person-hours) and a set of goals is defined that realistically can be achieved within these constraints. To keep development from degenerating into a random walk, long-range plans are defined to guide the iterations. These plans provide a vision for the overall system and set boundaries (e.g., constraints) for the project.
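The timebox method just described can be sketched as a small planning routine: the budget is fixed, and goals are selected to fit it rather than letting time expand. The goals and hour estimates below are invented for illustration.

```python
# Sketch of the timebox method: the person-hour budget is fixed, and
# goals are chosen to fit it. Goal names and estimates are hypothetical,
# and real timebox planning also weighs priority, not just fit.
def plan_timebox(candidate_goals, budget_hours):
    """Greedily pick goals (in priority order) that fit a fixed budget."""
    selected, remaining = [], budget_hours
    for goal, estimated_hours in candidate_goals:
        if estimated_hours <= remaining:
            selected.append(goal)
            remaining -= estimated_hours
    return selected

goals = [("login screen", 30), ("stock report", 50), ("audit trail", 80)]
print(plan_timebox(goals, budget_hours=90))  # ['login screen', 'stock report']
```

Anything that does not fit, here the audit trail, is deferred to a later timebox rather than stretching the current one.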
Each iteration within the process is conducted in the context of these long-range plans. Once an architecture is established, software is integrated and tested on a daily basis. This allows the team to assess progress objectively and identify potential problems quickly. Since small amounts of the system are integrated at one time, diagnosing and removing defects is rapid. User demonstrations can be held at short notice, since the system is generally ready to exercise at all times.

Scrum

Scrum is an agile method for project management. The approach was first described by Takeuchi and Nonaka in "The New New Product Development Game" (Harvard Business Review, Jan-Feb 1986).

Tools

Efficiently using prototyping requires that an organization have proper tools and a staff trained to use those tools. Tools used in prototyping can vary from individual tools, like 4th generation programming languages used for rapid prototyping, to complex integrated CASE tools. 4th generation visual programming languages like Visual Basic and ColdFusion are frequently used since they are cheap, well known and relatively easy and fast to use. CASE tools supporting requirements analysis, like the Requirements Engineering Environment (see below), are often developed or selected by the military or large organizations. Object-oriented tools are also being developed, like LYMB from the GE Research and Development Center. Users may prototype elements of an application themselves in a spreadsheet.

Screen generators, design tools and Software Factories

Also commonly used are screen generating programs that enable prototypers to show users systems that don't function, but show what the screens may look like. [4] Developing human-computer interfaces can sometimes be the critical part of the development effort, since to the users the interface essentially is the system. Software Factories are code generators that allow you to model the domain model and then drag and drop the UI.
They also enable you to run the prototype and use basic database functionality. This approach allows you to explore the domain model and make sure it is in sync with the GUI prototype. You can also use the UI controls that will later be used for real development.

Application definition or simulation software

A new class of software, called application definition or simulation software, enables users to rapidly build lightweight, animated simulations of another computer program without writing code. Application simulation software allows both technical and non-technical users to experience, test, collaborate on and validate the simulated program, and provides reports such as annotations, screenshots and schematics. As a solution specification technique, application simulation falls between low-risk, but limited, text- or drawing-based mock-ups (or wireframes), sometimes called paper-based prototyping, and time-consuming, high-risk code-based prototypes, allowing software professionals to validate requirements and design choices early on, before development begins. In doing so, the risks and costs associated with software implementations can be dramatically reduced. [5] To simulate applications one can also use software which simulates real-world software programs for computer-based training, demonstration, and customer support, such as screencasting software, as those areas are closely related. There are also more specialised tools. [6][7][8] Some of the leading tools in this category are Axure, Fluid UI, DefineIT from Borland, iRise, MockupTiger, Justinmind Prototyper, LucidChart and ProtoShare. [9][10][11]

Requirements Engineering Environment

The Requirements Engineering Environment (REE), under development at Rome Laboratory since 1985, provides an integrated toolset for rapidly representing, building, and executing models of critical aspects of complex systems. [15] The Requirements Engineering Environment is currently used by the Air Force to develop systems.
It is: an integrated set of tools that allows systems analysts to rapidly build functional, user interface, and performance prototype models of system components. These modeling activities are performed to gain a greater understanding of complex systems and lessen the impact that inaccurate requirement specifications have on cost and scheduling during the system development process. Models can be constructed easily, and at varying levels of abstraction or granularity, depending on the specific behavioral aspects of the model being exercised. [15]

REE is composed of three parts. The first, called proto, is a CASE tool specifically designed to support rapid prototyping. The second part is the Rapid Interface Prototyping System, or RIP, a collection of tools that facilitate the creation of user interfaces. The third part is a graphical user interface to RIP and proto that is intended to be easy to use.

Rome Laboratory, the developer of REE, intended it to support their internal requirements gathering methodology. Their method has three main parts:

Elicitation from various sources (users, interfaces to other systems), specification, and consistency checking
Analysis that the needs of diverse users taken together do not conflict and are technically and economically feasible
Validation that requirements so derived are an accurate reflection of user needs. [15]

In 1996, Rome Labs contracted Software Productivity Solutions (SPS) to further enhance REE to create "a commercial quality REE that supports requirements specification, simulation, user interface prototyping, mapping of requirements to hardware architectures, and code generation…" [16] This system is named the Advanced Requirements Engineering Workstation, or AREW.

LYMB

LYMB [17] is an object-oriented development environment aimed at developing applications that require combining graphics-based user interfaces, visualization, and rapid prototyping.
Non-relational environments

Non-relational definition of data (e.g. using Caché or associative models) can help make end-user prototyping more productive by delaying or avoiding the need to normalize data at every iteration of a simulation. This may yield earlier and greater clarity of business requirements, though it does not specifically confirm that requirements are technically and economically feasible in the target production system.

PSDL

PSDL is a prototype description language for describing real-time software. [12] The associated tool set is CAPS (Computer Aided Prototyping System). [13] Prototyping software systems with hard real-time requirements is challenging because timing constraints introduce implementation and hardware dependencies. PSDL addresses these issues by introducing control abstractions that include declarative timing constraints. CAPS uses this information to automatically generate code and associated real-time schedules, monitor timing constraints during prototype execution, and simulate execution in proportional real time relative to a set of parameterized hardware models. It also provides default assumptions that enable execution of incomplete prototype descriptions, integrates prototype construction with a software reuse repository for rapidly realizing efficient implementations, and provides support for rapid evolution of requirements and designs. [14]

Batch processing

From Wikipedia, the free encyclopedia

Batch processing is the execution of a series of programs (jobs) on a computer without manual intervention. Jobs are set up so they can be run to completion without manual intervention: all input data are preselected through scripts, command-line parameters, or job control language. This is in contrast to online or interactive programs, which prompt the user for such input. A program takes a set of data files as input, processes the data, and produces a set of output data files.
This operating environment is termed batch processing because the input data are collected into batches of files and are processed in batches by the program.

Benefits

Batch processing has these benefits:

It can shift the time of job processing to when the computing resources are less busy.
It avoids idling the computing resources with minute-by-minute manual intervention and supervision.
By keeping the overall rate of utilization high, it amortizes the cost of the computer, especially an expensive one.
It allows the system to use different priorities for batch and interactive work.

History

Batch processing has been associated with mainframe computers since the earliest days of electronic computing in the 1950s. There were a variety of reasons why batch processing dominated early computing. One reason is that the most urgent business problems, for reasons of profitability and competitiveness, were primarily accounting problems such as billing. Billing is inherently a batch-oriented business process, and practically every business must bill, reliably and on time. Also, every computing resource was expensive, so sequential submission of batch jobs on punched cards matched the resource constraints and technology evolution of the time. Later, interactive sessions with either text-based computer terminal interfaces or graphical user interfaces became more common. However, computers initially were not even capable of having multiple programs loaded into main memory.

Batch processing is still pervasive in mainframe computing, but practically all types of computers are now capable of at least some batch processing, even if only for housekeeping tasks.
That includes UNIX-based computers, Microsoft Windows, Mac OS X (whose foundation is the BSD Unix kernel), and, increasingly, even smartphones. Virus scanning is a form of batch processing, and so are scheduled jobs that periodically delete temporary files that are no longer required. E-mail systems frequently have batch jobs that periodically archive and compress old messages. As computing in general becomes more pervasive in society and in the world, so too will batch processing.

Modern systems

Despite their long history, batch applications are still critical in most organizations, in large part because many core business processes are inherently batch-oriented. (Billing is a notable example that nearly every business requires to function.) While online systems can also function when manual intervention is not desired, they are not typically optimized to perform high-volume, repetitive tasks. Therefore, even new systems usually contain one or more batch applications for updating information at the end of the day, generating reports, printing documents, and other non-interactive tasks that must complete reliably within certain business deadlines. Modern batch applications make use of batch frameworks such as Spring Batch, which is written for Java, and counterparts for other programming languages, to provide the fault tolerance and scalability required for high-volume processing. To ensure high-speed processing, batch applications are often integrated with grid computing solutions to partition a batch job over a large number of processors, although there are significant programming challenges in doing so. High-volume batch processing places particularly heavy demands on system and application architectures as well. Architectures that feature strong input/output performance and vertical scalability, including modern mainframe computers, tend to provide better batch performance than alternatives.
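The idea of partitioning a batch job over many processors can be sketched with Python's standard multiprocessing module. This is a minimal, single-machine illustration rather than a grid computing framework; the interest-calculation workload and record layout are hypothetical stand-ins for real batch work.

```python
from multiprocessing import Pool

def process_record(record):
    """One unit of batch work -- here a stand-in interest calculation
    (hypothetical 5% rate) applied to an (account_id, balance) pair."""
    account_id, balance = record
    return account_id, round(balance * 1.05, 2)

def run_batch(records, workers=4):
    """Partition the whole batch across a pool of worker processes,
    then collect the results in input order."""
    with Pool(processes=workers) as pool:
        return pool.map(process_record, records)

if __name__ == "__main__":
    batch = [(1, 100.0), (2, 250.0), (3, 80.0)]
    print(run_batch(batch))  # each record processed in parallel
```

Note that all input is preselected before the run begins and no user interaction occurs, which is exactly the batch-processing property described above; the partitioning challenges the text mentions (shared state, result ordering, failure recovery) are what real batch frameworks address.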
Scripting languages became popular as they evolved along with batch processing.

Batch performance problem/solution

Even with advances in batch program development, problems with batch performance are still very common, and they are particularly painful during implementations. Operations teams need to ensure that the batch window is not breached so that their service-level agreement (SLA) is met; breaches of an SLA can result in significant financial loss to the business. Best practice[citation needed] is for all batch processing to complete in under half the batch window, which is not usually achievable through performance tuning alone. Dedicated batch-performance solutions are claimed to be the new industry standard, as no additional development or hardware expenditure is required to make them work; runtimes can reportedly be reduced by more than 85%. [citation needed]

Common batch processing usage

Data processing

A typical batch processing schedule includes end-of-day (EOD) reporting. Historically, many systems had a batch window during which online subsystems were turned off and the system's capacity was used to run jobs common to all data (accounts, users, or customers) on a system. In a bank, for example, EOD jobs include interest calculation, generation of reports and data sets for other systems, printing (statements), and payment processing. Many businesses have moved to concurrent online and batch architectures in order to support globalization, the Internet, and other relatively newer business demands. Such architectures place unique stresses on system design, programming techniques, availability engineering, and IT service delivery.

Databases

Batch processing is also used for efficient bulk database updates and automated transaction processing, as contrasted with interactive online transaction processing (OLTP) applications. The extract, transform, load (ETL) step in populating data warehouses is inherently a batch process in most implementations.
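The ETL pattern mentioned above can be illustrated with a minimal sketch using Python's standard csv and sqlite3 modules. The input data, account identifiers, and the daily_totals table are all hypothetical; a real implementation would read exported files from an OLTP system and load a proper warehouse.

```python
import csv
import io
import sqlite3

# Extract: an in-memory CSV stands in for a file exported from an
# OLTP system (hypothetical data).
raw = io.StringIO("account,amount\nA-1,120.50\nA-2,75.25\nA-1,30.00\n")

# Transform: aggregate the amounts per account before loading.
totals = {}
for row in csv.DictReader(raw):
    totals[row["account"]] = totals.get(row["account"], 0.0) + float(row["amount"])

# Load: bulk-insert the transformed batch into the warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE daily_totals (account TEXT, total REAL)")
conn.executemany("INSERT INTO daily_totals VALUES (?, ?)", sorted(totals.items()))
conn.commit()

print(conn.execute("SELECT account, total FROM daily_totals ORDER BY account").fetchall())
```

The bulk insert via executemany, as opposed to one interactive statement per transaction, is what distinguishes this from the OLTP style contrasted in the text.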
Images

Batch processing is often used to perform various operations on digital images. Computer programs exist that let one resize, convert, watermark, or otherwise edit image files in bulk.

Converting

Batch processing is also used for converting a number of computer files from one format to another. This makes files portable and versatile, especially for proprietary and legacy files whose viewers are not easy to come by.

Notable batch scheduling and execution environments

UNIX utilizes the cron and at facilities to allow for scheduling of complex job scripts. Windows has a job scheduler. Most high-performance computing clusters use batch processing to maximize cluster usage. The IBM mainframe z/OS operating system/platform has arguably the most highly refined and evolved set of batch processing facilities, owing to its origins, long history, and continuing evolution; today such systems commonly support hundreds or even thousands of concurrent online and batch tasks within a single operating system image. Mainframe-unique technologies that aid concurrent batch and online processing include Job Control Language (JCL), scripting languages such as REXX, Job Entry Subsystem (JES2 and JES3), Workload Manager (WLM), Automatic Restart Manager (ARM), Resource Recovery Services (RRS), DB2 data sharing, Parallel Sysplex, unique performance optimizations such as HiperDispatch, I/O channel architecture, and several others.

Human–computer interaction

Human use of computers is a major focus of the field of HCI.

Human–computer interaction (HCI) involves the study, planning, and design of the interaction between people (users) and computers. It is often regarded as the intersection of computer science, behavioral sciences, design, and several other fields of study.
The term was popularized by Card, Moran, and Newell in their seminal 1983 book, The Psychology of Human-Computer Interaction, although the authors first used it in 1980 [1], and the first known use was in 1975 [2]. The term connotes that, unlike other tools with only limited uses (such as a hammer, useful for driving nails but not much else), a computer has many affordances for use, and this takes place in an open-ended dialog between the user and the computer.

Because human–computer interaction studies a human and a machine in conjunction, it draws on supporting knowledge from both the machine and the human side. On the machine side, techniques in computer graphics, operating systems, programming languages, and development environments are relevant. On the human side, communication theory, graphic and industrial design disciplines, linguistics, social sciences, cognitive psychology, and human factors such as computer user satisfaction are relevant. Engineering and design methods are also relevant. Due to the multidisciplinary nature of HCI, people with different backgrounds contribute to its success. HCI is also sometimes referred to as man–machine interaction (MMI) or computer–human interaction (CHI).

Attention to human–machine interaction is important because poorly designed human–machine interfaces can lead to many unexpected problems. A classic example is the Three Mile Island accident, where investigations concluded that the design of the human–machine interface was at least partially responsible for the disaster. [3][4][5] Similarly, accidents in aviation have resulted from manufacturers' decisions to use non-standard flight instrument or throttle quadrant layouts: even though the new designs were proposed to be superior with regard to basic human–machine interaction, pilots had the standard layout so deeply ingrained that the conceptually good idea actually had undesirable results.
Goals

A basic goal of HCI is to improve the interactions between users and computers by making computers more usable and more receptive to users' needs. Specifically, HCI is concerned with:

methodologies and processes for designing interfaces (i.e., given a task and a class of users, design the best possible interface within given constraints, optimizing for a desired property such as learnability or efficiency of use)
methods for implementing interfaces (e.g. software toolkits and libraries; efficient algorithms)
techniques for evaluating and comparing interfaces
developing new interfaces and interaction techniques
developing descriptive and predictive models and theories of interaction

A long-term goal of HCI is to design systems that minimize the barrier between the human's cognitive model of what they want to accomplish and the computer's understanding of the user's task.

Professional practitioners in HCI are usually designers concerned with the practical application of design methodologies to real-world problems. Their work often revolves around designing graphical user interfaces and web interfaces. Researchers in HCI are interested in developing new design methodologies, experimenting with new hardware devices, prototyping new software systems, exploring new paradigms for interaction, and developing models and theories of interaction.
Differences with related fields

HCI differs from human factors (or ergonomics) in that HCI focuses more on users working specifically with computers, rather than with other kinds of machines or designed artifacts. There is also a focus in HCI on how to implement the computer software and hardware mechanisms that support human–computer interaction. Thus, human factors is a broader term; HCI could be described as the human factors of computers, although some experts try to differentiate these areas. HCI also differs from human factors in that there is less focus on repetitive work-oriented tasks and procedures, and much less emphasis on physical stress and the physical form or industrial design of the user interface, such as keyboards and mice.

Three areas of study have substantial overlap with HCI even as the focus of inquiry shifts. In the study of personal information management (PIM), human interactions with the computer are placed in a larger informational context: people may work with many forms of information, some computer-based and many not (e.g., whiteboards, notebooks, sticky notes, refrigerator magnets), in order to understand and effect desired changes in their world. In computer-supported cooperative work (CSCW), emphasis is placed on the use of computing systems in support of the collaborative work of a group of people. The principles of human interaction management (HIM) extend the scope of CSCW to an organizational level and can be implemented without the use of computer systems.

Design principles

When evaluating a current user interface, or designing a new one, it is important to keep in mind the following experimental design principles:

Early focus on user(s) and task(s): Establish how many users are needed to perform the task(s) and determine who the appropriate users should be; someone who has never used the interface, and will not use it in the future, is most likely not a valid user.
In addition, define the task(s) the users will be performing and how often they need to be performed.

Empirical measurement: Test the interface early on with real users who come in contact with the interface on an everyday basis. Keep in mind that results may vary with the performance level of the user and may not be an accurate depiction of typical human–computer interaction. Establish quantitative usability specifics, such as the number of users performing the task(s), the time to complete the task(s), and the number of errors made during the task(s).

Iterative design: After determining the users, tasks, and empirical measurements to include, perform the following iterative design steps:
1. Design the user interface
2. Test
3. Analyze results
4. Repeat
Repeat the iterative design process until a sensible, user-friendly interface is created. [6]

Design methodologies

A number of diverse methodologies outlining techniques for human–computer interaction design have emerged since the rise of the field in the 1980s. Most design methodologies stem from a model of how users, designers, and technical systems interact. Early methodologies, for example, treated users' cognitive processes as predictable and quantifiable and encouraged design practitioners to look to cognitive science results in areas such as memory and attention when designing user interfaces. Modern models tend to focus on constant feedback and conversation between users, designers, and engineers, and push for technical systems to be wrapped around the types of experiences users want to have, rather than wrapping user experience around a completed system.

Activity theory is used in HCI to define and study the context in which human interactions with computers take place.
Activity theory provides a framework for reasoning about actions in these contexts, offers analytical tools in the form of checklists of items that researchers should consider, and informs the design of interactions from an activity-centric perspective. [7]

User-centered design: User-centered design (UCD) is a modern, widely practiced design philosophy rooted in the idea that users must take center stage in the design of any computer system. Users, designers, and technical practitioners work together to articulate the wants, needs, and limitations of the user and create a system that addresses these elements. Often, user-centered design projects are informed by ethnographic studies of the environments in which users will be interacting with the system. This practice is similar but not identical to participatory design, which emphasizes the possibility for end users to contribute actively through shared design sessions and workshops.

Principles of user interface design: These are seven principles that may be considered at any time during the design of a user interface, in any order: tolerance, simplicity, visibility, affordance, consistency, structure, and feedback. [8] See the list of interface design methods for more.

Display designs

Displays are human-made artifacts designed to support the perception of relevant system variables and to facilitate further processing of that information. Before a display is designed, the task that the display is intended to support must be defined (e.g. navigating, controlling, decision making, learning, entertaining). A user or operator must be able to process whatever information a system generates and displays; therefore, the information must be displayed according to principles, in a manner that supports perception, situation awareness, and understanding.

Thirteen principles of display design

Christopher Wickens et al.
defined 13 principles of display design in their book An Introduction to Human Factors Engineering. [9] These principles of human perception and information processing can be used to create an effective display design. A reduction in errors, a reduction in required training time, an increase in efficiency, and an increase in user satisfaction are a few of the many potential benefits of applying them. Certain principles may not be applicable to particular displays or situations; some may even conflict, and there is no simple rule saying that one principle is more important than another. The principles may be tailored to a specific design or situation, and striking a functional balance among them is critical for an effective design. [10]

Perceptual principles

1. Make displays legible (or audible). A display's legibility is critical for a usable design. If the characters or objects being displayed are not discernible, the operator cannot make effective use of them.

2. Avoid absolute judgment limits. Do not ask the user to determine the level of a variable on the basis of a single sensory variable (e.g. color, size, loudness), as such sensory variables can contain many possible levels.

3. Top-down processing. Signals are likely to be perceived and interpreted in accordance with what is expected based on a user's past experience. If a signal is presented contrary to the user's expectation, more physical evidence of that signal may need to be presented to assure that it is understood correctly.

4. Redundancy gain. If a signal is presented more than once, it is more likely to be understood correctly. This can be done by presenting the signal in alternative physical forms (e.g. color and shape, voice and print), as redundancy does not imply repetition. A traffic light is a good example of redundancy, since color and position are redundant.

5. Similarity causes confusion: use discriminable elements. Signals that appear to be similar will likely be confused. The ratio of similar features to different features causes signals to be similar; for example, A423B9 is more similar to A423B8 than 92 is to 93. Unnecessarily similar features should be removed and dissimilar features should be highlighted.

Mental model principles

6. Principle of pictorial realism. A display should look like the variable that it represents (e.g. high temperature on a thermometer shown as a higher vertical level). If there are multiple elements, they can be configured in a manner that looks like they would in the represented environment.

7. Principle of the moving part. Moving elements should move in a pattern and direction compatible with the user's mental model of how they actually move in the system. For example, the moving element on an altimeter should move upward with increasing altitude.

Principles based on attention

8. Minimizing information access cost. When the user's attention is diverted from one location to another to access necessary information, there is an associated cost in time or effort. A display design should minimize this cost by placing frequently accessed sources at the nearest possible position. However, adequate legibility should not be sacrificed to reduce this cost.

9. Proximity compatibility principle. Divided attention between two information sources may be necessary for the completion of one task. These sources must be mentally integrated and are defined to have close mental proximity. Information access costs should be kept low, which can be achieved in many ways (e.g. physical proximity, linkage by common colors, patterns, or shapes). However, close display proximity can be harmful by causing too much clutter.

10. Principle of multiple resources. A user can more easily process information across different resources.
For example, visual and auditory information can be presented simultaneously rather than presenting all visual or all auditory information.

Memory principles

11. Replace memory with visual information: knowledge in the world. A user should not need to retain important information solely in working memory or retrieve it from long-term memory. A menu, checklist, or other display can aid the user by easing the load on memory. However, relying on memory may sometimes benefit the user by eliminating the need to reference some type of knowledge in the world (e.g. an expert computer operator would rather use direct commands from memory than refer to a manual). The use of knowledge in the user's head and knowledge in the world must be balanced for an effective design.

12. Principle of predictive aiding. Proactive actions are usually more effective than reactive actions. A display should attempt to eliminate resource-demanding cognitive tasks and replace them with simpler perceptual tasks to reduce the use of the user's mental resources. This allows the user not only to focus on current conditions, but also to think about possible future conditions. An example of a predictive aid is a road sign displaying the distance to a destination.

13. Principle of consistency. Old habits from other displays will easily transfer to support processing of new displays if they are designed in a consistent manner. A user's long-term memory will trigger actions that are expected to be appropriate. A design must accept this fact and utilize consistency among different displays.