Wednesday, October 30, 2019

Specification for the Director: Case Study Example

The Epson Perfection V700 Photo is a groundbreaking flatbed scanner delivering the highest optical scan resolution available (6400 dpi) for photographic applications. With a 4.0 Dmax, it offers exceptional image quality, excellent detail in shadow areas, and remarkable color range. With its dual lens system, this innovative product automatically selects from two lenses for the desired scan resolution. In addition, its Digital ICE technology allows you to remove dust, scratches, and many other kinds of surface flaws from the original image, reducing the need for retouching. And, with the convenient film holder included, it's easy to batch-scan slides and negatives for increased productivity. This powerful scanner puts the industry's leading technologies right at your fingertips: optimize every scan with the exclusive Epson Dual Lens system, get professional-quality scans, enjoy exceptional tonal range and fine shadow detail, remove surface flaws from the original image, and scan large batches of 35 mm slides, negatives, and medium-format film.

Sunday, October 27, 2019

Introduction to DNA and Genetics

Genetics is the study of the qualities that are inherited and transmitted to offspring from parents during reproduction. Parents pass traits to their offspring, forming the basis of heredity. The inherited traits are coded for in genes, which are the inherited elements and the functional units of DNA. Through genetics, biologists and other scientists come to understand the processes and principles of heredity, genetic variation and genes. Genetics is of great interest in contemporary society. With the completion of the Human Genome Project, a great deal of information about genetics has been revealed that is essential to the understanding of human health. Understanding the genome and the mutations that affect it has led to explanations of genetically inherited traits. These characteristics were previously mysterious and could not be managed, as drugs could not be developed against them. The information gained from genetics has aided research into these diseases and has also opened a window to the possibility of finding cures for deadly cancers (Sack, 2008).

The building block of all genetic concepts is DNA. DNA is one of the nucleic acids and stores hereditary or genetic information. It is found in the nucleus in eukaryotic organisms and in the cytoplasm in prokaryotic organisms. DNA is composed of nucleotides, each made up of a sugar, a phosphate group and a base; the DNA sugar is deoxyribose. There are four essential bases in the structure of DNA, and from these four bases arise the numerous different sequences that lead to the differences observed among organisms. The bases are thymine, adenine, guanine and cytosine. The nucleotides bond to each other by phosphodiester bonds, leading to a ladder-like double helix structure. The double helix is formed by the pairing of anti-parallel strands of DNA, whose nucleotide bases bind to each other through hydrogen bonds: thymine pairs with adenine and guanine pairs with cytosine (Hawley, 2010).

DNA is divided into functional regions called genes. Genes vary in size, from a few hundred DNA bases to more than two million bases, and they code for the traits expressed by each individual. Each gene carries a specific set of instructions that codes for a particular protein or protein function. The nucleotide sequence of each gene forms the genetic sequence, which is crucial in the central dogma. The central dogma explains that the phenotypic appearance of an individual is a characteristic of their DNA: from the DNA sequence, RNA (another nucleic acid) transcribes and translates the information into proteins. Proteins are the building structures of the body. Therefore, the protein that is encoded in the DNA is the one that will be manufactured and expressed physically as the structure of the individual (Hawley, 2010; Sack, 2008).

Genes are found in packaged DNA sets called chromosomes. Each chromosome has millions of DNA bases, from fifty to two hundred million in number, and many genes make up a chromosome. Chromosomes are the means of transferring genetic information from parents to offspring. They are more complex than bare DNA, as they are made up of genes and binding proteins known as histones, with the DNA tightly wound around the proteins. Each organism has a specific number of chromosomes, and any excess or reduction results in an abnormality. Human beings have twenty-three pairs of chromosomes (46 chromosomes).
One of the pairs is the sex chromosomes (XX for female and XY for male); the other twenty-two pairs are autosomes (Hawley, 2010; Sack, 2008). Traits are inherited from both parents following the Mendelian laws of inheritance. The genetic makeup of an individual (the genotype) is made of two alleles of each gene. An allele is a copy of a gene that codes for the same trait, and one allele is inherited from each parent. If the alleles are identical, they are called homozygous; if they are not identical, they are heterozygous. Identical alleles have similar coding sequences at that particular locus. Each gene has a dominant allele that will be expressed when the pair of alleles is heterozygous (Jobling, Hurles, & Tyler-Smith, 2013).

The genetic material of human cells (and other eukaryotic cells), except red blood cells, is found in the nucleus. Red blood cells do not have a nucleus and hence do not carry genetic information. However, some organelles, such as mitochondria and chloroplasts, have their own DNA. These organelles contain multiple copies of small chromosomes and are inherited only from the mother: they are found in the ovum at fertilization, as the sperm cell contributes only nuclear genetic information. The exact location of a gene on a chromosome is called a locus. There are estimated to be fifty thousand to a hundred thousand genes in the human genome, yet the DNA in genes makes up only approximately 2% of the total genomic DNA. Much of the information on the non-coding DNA has not yet been unearthed, despite the successful completion of the Human Genome Project (Hawley, 2010). Formerly called junk DNA, the non-coding DNA is increasingly considered to serve essential functions in the central dogma. Scientists are, however, working to find out the exact role of this DNA, which has so far remained elusive. The locus of each gene has enabled the construction of the genetic map, as more than 13,000 genetic sites have been correctly identified (Sack, 2008).

The genetic maps have enabled the study of different inherited diseases: the particular location of the gene or genes responsible for a condition can be identified and studied. The development of gene maps makes it faster, cheaper and more practical for scientists to identify and diagnose a given genetic disease. Genetic mapping has made it possible to identify most hereditary diseases, such as cystic fibrosis, enabling adequate pharmaceutical research to be carried out on them. Hereditary diseases occur as a result of mutations in the genes that code for given proteins. A mutation can occur through deletions, substitutions or insertions of DNA bases at certain points, leading to frameshifts in the structure of the gene. These frameshifts result in coding for abnormal or non-functioning proteins (Loewe, 2008). Without these proteins to act in their usual roles, the body faces challenges adapting to situations that require them. For example, Duchenne muscular dystrophy results from a deletion in the gene that codes for an important muscle protein, dystrophin; the absence of dystrophin results in muscle weakness and inevitable early death (Behrman, Kliegman, & Jenson, 2011). However, some mutations are a result of adaptation: for example, the mutation that causes sickled red blood cells was an adaptation that protects against malaria in the tropics. Genetics has not only enabled us to understand more about ourselves, but has also given us more information about our origins.
Evolutionary genetics allows the comparison of genetic data on proteins from different organisms and the identification of where they diverged or converged in the evolutionary tree. With the development of technology and computers, the branch of bioinformatics has explored worlds unknown before: accurate data on genetic sequences have been compared across generations to establish the relationships between human beings and other organisms (Jobling, Hurles, & Tyler-Smith, 2013). With an understanding of the past and of evolution, genetics helps us to predict the future, and the future of genetics is one of the exciting branches of science that has fascinated many biologists.

Genetics is not without its shortcomings. The cloning debates and other ethical issues have brought the in-depth study and application of genetics into question. The knowledge of genetics might tempt scientists to try to play God in creating human beings without blemish. The cloning of human beings is a much-debated question, as are the issues surrounding personalized medicine. The study of the genetics of an individual also means that one gets to understand the genetics of the parents, raising ethical questions about informed consent. Ethical dilemmas also arise when two carrier parents are expecting a child who has been diagnosed with a disease: do they end the pregnancy, or wait for the baby to be born and suffer? In some instances, genetics has led to more questions than answers (Fulda & Lykens, 2006). Despite its shortcomings, genetics has opened multiple doors in contemporary science.

References:
Behrman, R. E., Kliegman, R., & Jenson, H. B. (2011). Nelson textbook of pediatrics. Philadelphia: W.B. Saunders Co.
Fulda, K. G., & Lykens, K. (2006). Ethical issues in predictive genetic testing: a public health perspective. Journal of Medical Ethics, 32(3), 143-147.
Hawley, R. S. (2010). Human Genome. Academic Press.
Jobling, M., Hurles, M., & Tyler-Smith, C. (2013). Human evolutionary genetics: origins, peoples & disease. Garland Science.
Sack, G. H. (2008). Genetics. New York: McGraw-Hill Medical.

Friday, October 25, 2019

Management and Leadership Essay

Leadership and management are two notions that are often used interchangeably. However, these words actually describe two different concepts. In this paper, I am going to discuss these differences and explain why the two terms are thought to be similar. Leadership is just one of the many assets a successful manager must possess, and care must be taken in distinguishing between the two concepts. The main aim of a manager is to maximize the output of the organization through administrative implementation. To achieve this, managers must undertake the following functions: organization, planning, staffing, directing, and controlling. Leadership is just one important component of the directing function. A manager cannot just be a leader; he also needs formal authority to be effective. "For any quality initiative to take hold, senior management must be involved and act as a role model. This involvement cannot be delegated" (Predpall, 30). In some circumstances, leadership is not required. For example, self-motivated groups may not require a single leader and may find leaders dominating. The fact that a leader is not always required proves that leadership is just an asset and is not essential. Managers think incrementally, while leaders think radically. "Managers do things right,...

Thursday, October 24, 2019

India & Mexico: the two stories Essay

By the end of the 20th century, the world had realized that the next century was going to be driven, in economic terms, by developing nations from South America, Central America and Asia. The role of the economically developed nations will be reduced to that of investor and consumer, while the developing nations will become producers, with foreign direct investment bringing the capital and technology for that production. Looking at Asia, the nations supporting the above-mentioned view are neither Japan nor the Southeast Asian tigers comprising ASEAN, but the world's two most populated nations, China and India. Many economists have called the rise of these two nations the arrival of 'Chindia'. China has now become the factory of the world, while India is a service-sector giant, leading in sectors like software development and the BPO industry (Perkovich, 2003). In the case of Central America and South America, the countries expected to match the growth of the other developing nations are Brazil, Argentina and Mexico. These nations have very different pasts once things like political stability and economic policy are taken into consideration.

India and its late rise

Though both China and India have now become very successful cases of the FDI-channelled development model, the structures of their economies sit at opposite ends of economic theory. In China, a one-party communist state with a very strong central government, economic decisions are taken irrespective of what the people on the ground actually wish, while in India, a secular democratic nation with a multi-party political system, decisions related to the economy are often taken under compulsions like electoral promises and are very much populist in nature. The governments, both central and state, are always under pressure from opposition parties and popular public demand, and many a time decisions are affected by this factor. At the same time, India's economic stance for more than forty years after its independence had been protective and only weakly connected with the world (Bromley, Mackintosh, Brown & Wuyts, 2004, p. 196). Its neutral stand during the Cold War and its strategic military relationship with the USSR meant very little interaction with the western world led by the USA. The country continued to pursue its independent political stand, but as it entered the 21st century its economic structure saw extreme changes: the country now boasts the USA as its largest trading partner, the US has found great interest in the world's largest democracy, and the recently signed nuclear treaty between US President George Bush and Indian Prime Minister Dr. Manmohan Singh clearly underlines the growing interest between the two.

So the new India, or rather the liberalized India after the reforms, presents a clear case for Kenneth Waltz's theory of international relations, which states that the actions of a state can often be affected by pressures exerted by international forces, thereby limiting the options available to it (1979). The neorealist or structural model was developed with the aim of explaining the repeating patterns of state behavior, and power and its extent, understood as the combination of a state's capacity to resist external influence while influencing others to behave according to its wishes.
The liberalization process in India began in the early 1990s during the tenure of Prime Minister P. V. Narasimha Rao, under the leadership of the then Finance Minister Dr. Manmohan Singh. The reform process and India's integration into the world economy were widely appreciated, with the International Monetary Fund (IMF) calling them a long-term corrective measure. The reform process, which began with India signing GATT and becoming a part of the WTO, was welcomed by almost all economic quarters (Bromley et al., 2004, p. 173). The Narasimha Rao government continued with its reforms, though slowly, despite stiff resistance from the major opposition parties, partly by breaking the opposition's unity (Bromley et al., 2004, p. 167).

After entering this new economic fold, the Indian state's decisions showed signs of being influenced by external international forces, including the IMF, the World Bank and other trade partners such as the US and the EU. On economic issues, the Indian government had to observe the demands of the IMF and the World Bank in order to obtain loans. Some of the demands the IMF made were import liberalization, tariff reduction, decontrolling the food-grain market, decreasing subsidies in the food and agricultural sector, PSU privatization, enabling laws for attracting FDI in manufacturing and infrastructure projects, and opening the domestic banking and insurance sector, i.e., financial liberalization (Bromley et al., 2004, p. 199). The Indian government reacted cautiously, but over a considerably long period it opened some sectors to foreign players holding majority stakes, while in most sectors FDI was permitted only up to a capped percentage, whether 26 percent or up to 49 percent (Govt. of India, 2005).

The economic reform of the Indian economy accelerated sharply under the next ruling party. The BJP government proved pro-reform, and the measures it took continued along the path initiated by the Narasimha Rao government. This stand of the BJP was in sharp contrast to the position it had taken at the beginning of the reform movement (Bromley et al., 2004, p. 168). Under the BJP government, India tested five nuclear weapons and was widely criticized by most countries (Perkovich, 2003). The US government imposed a series of economic sanctions and the relationship between the two nations began to deteriorate. But the Indian economy showed resilience, the US Congress and other western nations recognized this fact, and the sanctions were removed in a number of phases. The terrorist attacks of September 11, 2001 changed the scenario: the world under US leadership began treating terrorism as an international threat, and India, being a victim of Pakistan-sponsored terrorism, gave unequivocal support to the US-led war on terror (Perkovich, 2003).

But the real success of India has been the success of software giants like Infosys, TCS, Wipro and many smaller ones (Bromley et al., 2004, p. 209). These companies opened a new era of business through the outsourcing of jobs from the US, and this led many US-based MNCs like Accenture, IBM, GE and others to invest heavily in India. The situation has become such that the growth of Indian firms is dependent on the US. Now other sectors like retail, automobiles and telecommunications
are receiving large inflows through the FDI channel (Perkovich, 2003). This US-supported growth of the economy has made the government follow its foreign policies with extra care, so that the interests of the US are taken into account and the mutually beneficial relationship between the two countries remains intact. These things clearly show that the country's stand on different international issues has started to be affected by the economic policies of developed nations, especially the USA (Kapila, 2006).

Mexico and Economic Liberalization

When we think of the continent of North America, we normally picture two economically very developed countries, the United States of America and Canada. But Mexico is another major economy of the region, with a different structure and status. Basically a developing country with a very unusual past compared with the other major countries of the region, Mexico pursued an economic policy for the greater part of the 20th century that lacked any clear vision. The political establishment supported various ideologies at the same time: the left-leaning administration and economic policy of Cuba and other leftist economies of the world received support from most Mexican governments over the past fifty years, yet the same governments reacted sharply against any communist movement within the nation. Before 1970, Mexico's economic policy was one of public-private partnership, and investment by foreign companies was given high priority. But after the massacre in the Plaza of the Three Cultures, the newly elected government started following an economic policy leaning more towards the leftist philosophy of collective ownership. And despite the flagging economic condition of the nation, populist policies aimed at earning political mileage became national policy. With every new government the country followed a comparatively different policy, creating more economic and monetary instability instead of any straightforward economic growth.

The year 1994 saw the beginning of a new era in trade relations among the three major countries of North America. With the launch of the North American Free Trade Agreement (NAFTA), comprising two of the world's most powerful economies, the USA and Canada, together with Mexico, the whole economic situation of the region became a matter of close observation. In the case of Canada and the USA, there already existed a number of bilateral agreements on issues ranging from defense and border security to trade and commerce. But from the Mexican point of view, NAFTA has been much more than a simple regional trade agreement. Beyond being a platform for boosting trade, Mexico's participation in NAFTA has been seen as the most effective tool to achieve two important missions. The first has been to direct the Mexican economy onto an export-led, non-inflationary growth path. With the USA as its major economic partner, NAFTA was seen by the Mexican government as a platform from which to launch large-scale exports to its much larger economic partner. The internal structure of the Mexican economy was also opened up to change, with a new set of policies ensuring free-trade initiatives and drastic reductions in tariffs and quotas to promote intra-regional trade (Moreno-Brid, Validiva & Santamaria, 2005).
The second objective of the above-mentioned reform process was to make the process effectively irreversible. The NAFTA accord made sufficient provision that any attempt by future governments to return to the days of trade protectionism would trigger international legal and extra-legal constraints, and the Salinas administration, along with other supporters, blunted every attack by its opponents to keep the path of reform undisturbed. The whole purpose of the treaty, for Mexico, was to make the nation a very attractive location for the manufacture of products that could easily be exported to the USA (Moreno-Brid et al., 2005).

It has now been more than a decade since NAFTA came into existence, and if the economic condition of Mexico is examined, the expectations raised by the Salinas government have only been partially satisfied. The country has made considerable economic advances, visible in the era of small budget deficits and low inflation that followed the treaty. The export of non-oil products has also reached a very high level, with a surge in foreign direct investment (FDI). But the euphoria associated with the treaty loses its charm when the number of jobs created in the liberalized economy is taken into account, and the rate of growth of GDP is still below the level the economy had attained in the years well before liberalization (Moreno-Brid et al., 2005).

So, for Mexico, the outcome of being a part of NAFTA has been very limited. If the limited gains are compared with what had been expected beforehand, NAFTA appears more a failure than a success. As early as 1994, the possibility of this sort of result was predicted by the eminent US political scientist Stephen Krasner. Drawing on the realist model, Krasner clearly stated that though NAFTA was an excellent attempt at a beneficial regional agreement, from Mexico's point of view it was not going to lay any golden egg (Bromley, Mackintosh, Brown & Wuyts, 2004, p. 264). The extreme differences in business culture and in the size of the US and Mexican economies are an important reason for the limited success of the agreement, and a broad result like the one achieved between the US and Canada can never be expected (Extra Material, p. 10). The economic policy of the US has been more imperialistic in nature, and this very policy gives rise to anti-Americanism, which reduces the extent to which the US and Mexico could have cooperated. This is very much in agreement with Waltz's theory, which states that the international environment is anarchic owing to the lack of any common controlling authority, giving rise to issues such as national threats and conflicts of both a military and an economic nature (1979). As a result of apprehensions such as rich nations fearing illegal migration and human trafficking, the extent of cooperation between the two states will be very limited (Hollifield, 2006). The states will be found more concerned with maintaining their relative power in relation to one another, while at the same time avoiding any permanent damage to the existing relationship (Bromley et al., 2004, p. 278).
Waltz's much-analyzed theory of international relations, valid in almost every case, can be used successfully to understand the case of NAFTA and Mexico. History is full of differences and conflicts between the US and Mexico, with Mexico in constant fear of losing its sovereignty. This fear and the bitterness of the past have always weighed on the success of any pact between Mexico and the US. The economic policy of the US has widely been considered imperialistic, and since Mexico became a part of NAFTA to increase its exports, especially to the US, the Mexican establishment will always be under the influence of US economic policies and decisions and may have to modify its international economic and business policies to suit the US and the economic benefits Mexico gains from trade with such a large neighbor (Bromley et al., 2004, p. 264). So the fear of the past was of invasion over the geographical boundary, with Mexico preferring Latin culture to the pro-US North American trend; now, having become a very important part of the North American economic group, Mexico is in continuous fear of losing control over its economy and currency and might have to face economic colonialism.

References
Hollifield, J. F. (2006). Trade, migration and economic development: The risks and rewards of openness. Retrieved June 1, 2007, from http://www.dallasfed.org/news/research/2006/06migr/hollifield.pdf
Moreno-Brid, J., Validiva, J. C. R., & Santamaria, J. (2005). Mexico: Economic growth, exports and industrial performance after NAFTA. Economic Development Unit. Retrieved June 1, 2007, from http://www.wilsoncenter.org/news/docs/Mexico_after_NAFTA_ECLAC.pdf
Bromley, S., Mackintosh, M., Brown, W., & Wuyts, M. (2004). Making the International: Economic Interdependence and Political Order. Pluto Press.
Waltz, K. N. (1979). Realist Thought and Neorealist Thesis. Journal of International Affairs. Retrieved June 1, 2007, from http://classes.maxwell.syr.edu/PSC783/Waltz44.pdf
Govt. of India (2005). Investing in India: Foreign Direct Investment - Policy & Procedures. Department of Industrial Policy & Promotion, Ministry of Commerce & Industry, Government of India, New Delhi. Retrieved June 1, 2007, from http://dipp.nic.in/manual/manual_03_05.pdf
Kapila, S. (2006). Iran's nuclear issue: India well advised to be objective. South Asia Analysis Group. Retrieved June 1, 2007, from http://www.saag.org/%5Cpapers17%5Cpaper1694.html
Perkovich, G. (2003). The measure of India: What makes greatness? 2003 Annual Fellows' Lecture, University of Pennsylvania. Retrieved June 1, 2007, from www.sas.upenn.edu/casi/publications/Papers/Perkovich_2003.pdf

Wednesday, October 23, 2019

Project on Motivation of Nurses Essay

The most traumatic and stressful moments of an individual's life are when he or she falls ill. Nurses are synonymous with care and attention in times of need such as these. In a world mostly driven by personal ambition and corporate profit, nurses, with their commitment to patient welfare and selfless service, provide a contrasting study. A nurse acts as a savior in distress and is often called upon to make great personal sacrifices in the discharge of her duties. The profession of nursing is therefore not merely a 'job', and the potentially powerful insights about commitment to work that nurses could provide encouraged us to choose them as our subjects of study. As part of Phase I of the project, we interviewed four nurses who differed in their amount of experience, their position in the hospital hierarchy, and the backgrounds from which they came. To study the commitment of employees towards an organization and understand the various factors behind it, we selected the nursing staff of hospitals for this study. From the interviews, many broad themes emerged, all of which correlate positively with the nurses' high level of commitment to the organization. These points logically lead us to our hypotheses as to what keeps them committed to their place of work. There were some key themes which we noted across all four of our respondents. All the nurses were very excited by the kind of recognition that the hospital was willing to give them; they seemed to treat this as a reward for their hard work and dedication and were motivated by it. All nurses were also impressed by their working relationship with their superiors (head nurse/doctor) in the hospital, who treated them as members of a family and with much respect. Nurses were also willing to stay on with the hospital because it provided them with opportunities for personal development. Accordingly, our hypotheses are as follows:
* There is a positive correlation between the amount of recognition that the nurses receive for their work and their commitment to the organization.
* There is a positive correlation between a positive relationship between the doctors/supervisors and the nurses and the commitment of the nurses to the organization.
* There is a positive correlation between the opportunity for learning and personal development that the organization provides and the commitment of the nurses to the organization.

RESEARCH METHODOLOGY

Based on our earlier survey of four respondents, we determined that the following three variables play a key role in determining the commitment of nurses to their respective hospitals:
* Recognition
* Relationship with superiors
* Learning and development
We then tried to identify whether these three variables actually affect the commitment levels of nurses at various nursing organizations. For this, we carried out a survey of 30 nurses with diverse backgrounds (a detailed description of the respondents is covered in the following section) and questioned them on a scale of 5 to 1 (5: strongly agree, 1: strongly disagree) across 24 questions. Since our hypotheses identify three variables as affecting commitment, we tested the existence of each of these three independent variables by framing four questions for each variable. Similarly, we determined the commitment levels of the nurses through a set of 12 questions. By using a 5-point scale, we captured not only the existence, but the extent of existence, of these variables.
Since we found the responses to be reliable, we determined the correlation between the three independent variables and the dependent variable, commitment, to see whether our hypotheses are correct.

Hospitals chosen for the survey: We surveyed nurses from four different hospitals to ensure diversity. The organizations range from a large hospital located in a city like Bangalore, to mid-sized hospitals located in Tier-2 cities like Ajmer and Allahabad, and a focused surgical-specialty hospital located in a smaller town like Varanasi. This selection lends diversity to our respondents through differences in location, specialization of hospitals, daily footfall (reflecting the magnitude of work for the nurses), number of departments, etc. Refer to Appendix 1 for a description of the hospitals used for our survey.

General profile of the nurses: We ensured diversity in our respondents while choosing nurses at all of the survey hospitals. The diversity spans age, number of years worked at the organization, and departments worked in.

QUESTIONS IN THE SURVEY

The questions in the survey were aimed at understanding the extent to which each of our three independent variables and our dependent variable, commitment, was present in these organizations. We captured the different parameters relating to these variables by framing questions addressing their various facets. Refer to Appendix 2 for a detailed discussion of the questions used for the survey.

ANALYSIS OF RESULTS OF THE SURVEY

Reliability of scales: Reliability is used to check the consistency of the question set under consideration. In this survey we are testing whether the three hypotheses that we have come up with explain the commitment of employees towards the organization. Reliability of a survey can be measured in many ways; here we check internal-consistency reliability, which indicates reliability within the survey across responses to similar types of questions.

Reliability of the dependent variable:
* Affective commitment: The questions in this set all try to test for one feature, attachment to the organization. The questions are direct and straightforward in bringing out the required feature, making it a set of good reliability. We obtained a reliability of 0.6 for this set, which is the highest among the three different commitment scales.
* Continuance commitment: In this set we try to test commitment by gauging the person's dependence on the organization and how much of a change it would mean for him or her to switch jobs. The reliability obtained for this set is 0.53, which is a high value for a one-time survey. This is a good indication that the question set focuses on the same core question.
* Normative commitment: This question set mainly tries to identify the sense of belonging the person has towards the organization. The question set is clear in conveying this objective, but the attribute is not so direct and easy to understand from an individual's perspective. Hence the respondents' answers in this set have a lower reliability of 0.39.

Reliability of the independent variables: For the first and third variables (recognition and learning & development), the reliability is low, at 0.46 and 0.47 respectively, while for the second variable (relationship with superiors) the reliability is relatively high, at 0.66.
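As an illustration of the internal-consistency figures quoted above, the short sketch below computes Cronbach's alpha for a block of Likert items. The report does not name the exact reliability coefficient it used, and the response matrix here is hypothetical, so treat this only as a minimal sketch of how such a number can be obtained.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                               # number of questions in the set
    item_variances = items.var(axis=0, ddof=1)       # variance of each question
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point responses from six nurses to four "relationship" questions.
responses = np.array([
    [5, 4, 5, 4],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
    [5, 5, 4, 4],
    [2, 3, 2, 3],
    [4, 5, 4, 4],
])
print(round(cronbach_alpha(responses), 2))
```

A value near 0.7 or above is usually read as acceptable internal consistency, which is why the 0.66 obtained for the relationship questions is described below as being close to the acceptable range.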
Recognition and Learning & Development: It is easy to see why responses to the questions around recognition and around learning and development score so low in reliability. The questions are quite different from each other, and the same person can give very different responses to each of them if he or she interprets the different questions as indicating different things. In comparison, the questions related to relationships were fairly interrelated and were able to fetch more consistent responses. The overall reliability of all the responses is still lower than 0.7, which can be attributed to the fact that no pilot study was done; a pilot could have been used as input for framing the questions in a better way so as to improve the reliability of the responses.

Relationship with superiors: We obtained the highest reliability for responses under this hypothesis. The reliability was found to be 0.66, which is near the acceptable range. The reason for the higher reliability of this question set (Appendix 2) is the ease of understanding the questions. Every person has a fair idea about relationships, and these questions, although quite different from each other, give the respondent a fairly good sense of what is being asked. So the answers remain consistent, giving comparatively higher reliability.

Correlation between predictor and dependent variables:
a) Affective commitment: Before seeing the actual test results, we expected all three predictors to affect affective commitment. Recognition was one factor because appreciation helps the employee (nurse) develop a connection with the organization. Opportunities for personal development, and training opportunities to support this, also make employees feel good, which is essential for improving commitment levels. Good relationships with superiors definitely help people develop an emotional bond with the organization.
b) Continuance commitment: Continuance commitment describes how the employee feels about staying longer in the organization, and one of the most important reasons here is economic considerations, which are to some extent explained by rewards and recognition. The recognition obtained will motivate nurses to work better and stay longer in an organization. Constant opportunities for growth and training will also help them stay committed to the organization.
c) Normative commitment: This form of commitment mainly reflects the sense of giving back to the organization, and the predictor we identified as most important was the relationship with supervisors (head nurse/doctor). Training and development reflect the investment made by the organization in its employees, so employees feel an obligation to stay committed to the organization to pay it back, which increases their normative commitment.

Actual results: statistical significance of the correlations. With the acceptable alpha level of 0.05 for social science research and 30 respondents (df = 28), the critical value of the correlation is 0.361. Comparing this with the results we obtained, we notice that out of nine correlations, three are not significant (values less than 0.361) and six are significant (values higher than 0.361). When a correlation is significant, we reject the null hypothesis of no relationship and accept the alternative hypothesis that a relationship exists.

Hypothesis 1: From the results it is apparent that hypothesis 1 is only partially supported, as just one correlation out of three is significant, namely the one with affective commitment.
So we can interpret that recognition affects affective commitment but not the other two types of commitment.

Hypothesis 2: For the second hypothesis, two out of three correlations are significant, so it too is partially supported. The relationship with doctors is not related to continuance commitment but is related to affective and normative commitment.

Hypothesis 3: Hypothesis 3 is completely supported, as all the correlations are significant, i.e., higher than 0.361. This means that learning and development is related to all the dependent variables and affects all three dimensions of commitment considered here: affective, continuance, and normative.

Plausible explanation of variations between correlations:

Recognition: Recognition showed the highest correlation with affective commitment, while significant correlations were not established with continuance and normative commitment. When nurses are recognized by the organization, they tend to develop an emotional connection with it. They feel happier working in the hospital and a sense of belonging is nurtured within them. The appreciation received for their work translates into an attachment to the organization, which explains the correlation with affective commitment. The recognition is mostly in the form of awards and words of praise, not monetary in nature; therefore a significant correlation with continuance commitment has not been established. Similarly, recognition for work does not lead to a feeling of obligation towards the organization. Rather, the positivity generated by the appreciation of work manifests itself as an emotional attachment towards the organization, which is reflected in the correlation with affective commitment.

Relationship with superiors: This independent variable showed the highest correlation with normative commitment and a lower correlation with affective commitment, while the correlation with continuance commitment was insignificant. The nurses tend to view the exceptionally good relationships with the doctors as being facilitated by the hospital. They therefore feel indebted to the hospital for providing them with an excellent working environment, which would be missing at other places. The moral obligation they feel towards their hospital for the respect and dignity with which doctors and supervisors treat them is translated into a high correlation with normative commitment.

Learning and development: The results of the survey show that learning and development has a high correlation with affective commitment to the organization and a comparatively low correlation with continuance and normative commitment. Intuitively, we expected a high correlation between learning and development and normative commitment, because employees would feel an obligation towards an organization that invests time and resources to train them and ensure their personal development. However, we realize this may not necessarily hold true once we take into account the atypical nature of the nursing profession. Nurses feel a sense of duty towards their patients and an ethical and moral obligation to serve the sick to the best of their capacity. In fact, nurses who have been trained well, have dealt with varied patient cases and have experienced a great deal of learning would perhaps feel a greater motivation to serve society at large, and hence perhaps a lower moral obligation to stay with the organization.
We believe the existence of a caring and people-centric management could be the reason for the high correlation between affective commitment and learning and development. An organization that has caring and people-friendly management would earn emotional loyalty from its employees because of the care and good treatment given to them. Such an organization would also take the effort to ensure that its human resources constantly learn and develop so as to contribute to the success of the organization.
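To make the significance check used above concrete, here is a minimal sketch, assuming Pearson correlations and purely hypothetical per-nurse scores (the real survey data are not reproduced here), that recovers the critical value of roughly 0.361 for 30 respondents at the 0.05 level and compares an observed correlation against it.

```python
import numpy as np
from scipy import stats

# Hypothetical per-nurse scores: the mean of the four "recognition" items and
# the mean of the twelve commitment items (30 respondents, as in the study).
rng = np.random.default_rng(0)
recognition = rng.uniform(1, 5, size=30)
commitment = 0.5 * recognition + rng.uniform(0, 2.5, size=30)  # loosely related

r, p = stats.pearsonr(recognition, commitment)

# Two-tailed critical r for n = 30 (df = 28) at alpha = 0.05, as cited in the text.
t_crit = stats.t.ppf(1 - 0.05 / 2, df=28)
r_crit = t_crit / np.sqrt(28 + t_crit ** 2)   # comes out to about 0.361

print(f"r = {r:.3f}, p = {p:.3f}, critical r = {r_crit:.3f}")
print("significant" if abs(r) > r_crit else "not significant")
```

Under this reading, any of the nine observed correlations larger in magnitude than the critical value would lead us to reject the null hypothesis of no relationship, exactly as done above for six of them.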

Tuesday, October 22, 2019

Indulgences and their Role in the Reformation

An 'indulgence' was part of medieval Catholicism and a major trigger of the Protestant Reformation. Basically, indulgences could be purchased in order to reduce the punishment you were owed for your sins. Buy an indulgence for a loved one, and they would go to heaven and not burn in hell; buy an indulgence for yourself, and you needn't worry about that pesky affair you'd been having. If this sounds like cash or good deeds for less pain, that is exactly what it was. To many holy people like Martin Luther, this was against Jesus, against the idea of the church, against the point of seeking forgiveness and redemption. By the time Luther acted against it, Europe had evolved to the point that it would split in the revolution of the Reformation.

What They Did

The medieval western Christian church (the Eastern Orthodox church was different and is not covered by this article) included two key concepts which allowed indulgences to occur. Firstly, you were going to be punished for the sins you accumulated in life, and this punishment was only partly erased by good works (like pilgrimage, prayers or donations to charity), divine forgiveness and absolution. The more you had sinned, the greater the punishment. Secondly, by the medieval era, the concept of purgatory had developed: a state entered after death where you would suffer the punishment which would reduce your sins until you were free, so you weren't damned to hell but could work things off. This system invited something which would enable sinners to reduce their punishments in return for something else, and as purgatory emerged, bishops were given the powers to reduce penance. This developed during the crusades, where you were encouraged to go and fight (often) abroad in return for your sins being canceled. It proved a highly useful tool to motivate a worldview where the church, God, and sin were central. From this, the indulgence system developed. Do enough to earn a full, or 'plenary', indulgence from the Pope or lesser ranks of churchmen, and all your sin (and punishment) was erased. Partial indulgences would cover a lesser amount, and complex systems developed which claimed to tell you, down to the day, how much sin you'd canceled.

Why They Went Wrong

This system of reducing sin and punishment then went, in the eyes of many Reformation reformers, hideously wrong. People who didn't, or couldn't, go on crusade wondered whether some other practice might allow them to earn the indulgence. Perhaps something financial? So the indulgence came to be associated with people 'buying' them, whether by offering to donate sums to charitable works, to buildings that praised the church, or in all the other ways money could be used. This began in the thirteenth century and developed to the point where government and church were creaming off a percentage of the funds, and complaints about selling forgiveness spread. You could even buy indulgences for your ancestors, relatives, and friends who were already dead.

The Division of Christianity

Money had infested the indulgence system, and when Martin Luther wrote his 95 Theses in 1517 he attacked it. As the church attacked him back, he developed his views, and indulgences were squarely in his sights. Why, he wondered, did the church need to accumulate money when the Pope could, really, just free everyone from purgatory by himself?
The church divided into fragments, many of which threw the indulgence system out entirely, and while they didn't cancel its underpinnings, the Papacy reacted by banning the sale of indulgences in 1567 (though they still existed within the system). Indulgences were the trigger for centuries of bottled-up anger and confusion against the church, and allowed it to be cleaved into pieces.

Monday, October 21, 2019

Free Essays on Good Emperors

The Five Good Emperors, known as Nerva, Trajan, Hadrian, Antoninus Pius, and Marcus Aurelius, were a series of excellent emperors who ruled in Rome from 96-180 AD, following the Flavian Dynasty. They were called this because they won the support of the senate, which is something their predecessors had failed to do. The period of the five good emperors was mainly famous for its peaceful manner of succession: each emperor chose his successor by adopting an heir, preventing the political chaos associated with succession both before and after this period.(1) The first of these great emperors was Marcus Cocceius Nerva, ruling from 96-98 AD, who was selected to take the throne by the assassins of the prior emperor, Domitian. He was an old-fashioned man who promised to deal with the senate fairly and never put one of its members to death. The key things that characterize the rule of Nerva are his excellent relations with the senate, his completion of Domitian's projects, his immense expenditure on securing public goodwill, his effort to stoke public loathing for Domitian, and the fact that he initiated an arrangement of adopting heirs to ensure the succession of the best candidates. He adopted Trajan as his heir, who thus inherited the throne after him. The second emperor, Trajan, was in power from 98-117 and began his reign with a display of force, killing all the leaders of the group who had humiliated Nerva. He was named Optimus Maximus, meaning 'the best', because of his respect for the senate and a series of foreign wars in which he attempted to expand the empire. He is well known for his assistance to public services, including an increase in the free distribution of food, the repair of roads, and the construction of the Forum, Market, and Baths of Trajan. He adopted Hadrian, who became his heir. Publius Aelius Hadrianus, Hadrian, the third of the great emperors to rule Rome, was in power from 117-138. His first ac...

Sunday, October 20, 2019

Lesson Plan Writing Tips for Teachers

Lesson plans help classroom teachers to organize their objectives and methodologies in an easy-to-read format.

Difficulty: Average
Time Required: 30-60 minutes

Here's How to Write a Lesson Plan
1. Find a lesson plan format that you like. Try the Blank 8-Step Lesson Plan Template below, for starters. You may also want to look at lesson plan formats for language arts, reading lessons, and mini-lessons.
2. Save a blank copy on your computer as a template. You may want to highlight the text, copy, and paste it onto a blank word processing app page instead of saving a blank copy.
3. Fill in the blanks of your lesson plan template. If you are using the 8-Step Template, use these step-by-step instructions as a guide for your writing.
4. Label your learning objective as cognitive, affective, psychomotor, or any combination of these.
5. Designate an approximate length of time for each step of the lesson.
6. List the materials and equipment needed for the lesson. Make notes about those that need to be reserved, purchased, or created.
7. Attach a copy of any handouts or worksheets. Then you will have everything together for the lesson.

Tips for Writing Lesson Plans
* A variety of lesson plan templates can be found in your education classes, from colleagues, or on the Internet. This is a case where it isn't cheating to use somebody else's work. You'll be doing plenty to make it your own.
* Remember that lesson plans come in a variety of formats; just find one that works for you and use it consistently. You may find through the course of a year that you have one or more that fits your style and the needs of your classroom.
* You should aim for your lesson plan to be less than one page long.

What You Need:
* Lesson Plan Template
* Well-Defined Learning Objectives: this is a key element; everything else flows from the objectives. Your objectives need to be stated in terms of the student. They have to be something that can be observed and measured. You have to list specific criteria for what is an acceptable outcome. They can't be too long or overly complicated. Keep it simple.
* Materials and Equipment: You will need to ensure that these are going to be available for your class when the lesson is being taught. If you are too ambitious and require items that your school doesn't have, you will need to rethink your lesson plan.

Blank 8-Step Lesson Plan Template
This template has eight basic parts that you should address. These are Objectives and Goals, Anticipatory Set, Direct Instruction, Guided Practice, Closure, Independent Practice, Required Materials and Equipment, and Assessment and Follow-Up.

Lesson Plan
Your Name
Date
Grade Level:
Subject:
Objectives and Goals:
Anticipatory Set (approximate time):
Direct Instruction (approximate time):
Guided Practice (approximate time):
Closure (approximate time):
Independent Practice (approximate time):
Required Materials and Equipment (set-up time):
Assessment and Follow-Up (approximate time):

Saturday, October 19, 2019

Marcel Duchamp Prefigures Walter Benjamin's Thesis Essay

The essay "Marcel Duchamp Prefigures Walter Benjamin's Thesis" explores Walter Benjamin's thesis and Marcel Duchamp. Art has evolved since it was first discovered, and the reason behind all these forms of evolution is to ensure that the production of works of art suits the aesthetic needs of the people to whom they are presented. A key point in global art history in terms of evolutionary art is the early 20th century, when Walter Benjamin hypothesized and eventually produced an essay on art in the age of mechanical reproduction. In this paper, the ways in which the work of Marcel Duchamp prefigures Walter Benjamin's thesis in that essay shall be analyzed. Marcel Duchamp has been a major contributing icon in the world of art, especially in the 20th century. Born in France in 1887, Duchamp had the opportunity of holding his first exhibition in 1908, in what was termed the Salon d'Automne, through the influence of his brother. But since then, Duchamp took a great deal of control over what he could do as an artistic personality. It is not for nothing that Perloff notes that Duchamp's readymades now command sky-high prices, with people applying for permission to reproduce some of his related images in a scholarly book on modernism paying as much as $200 apiece. This means that Duchamp has continued to remain a very influential figure in art since the 1990s and continues to dominate modern artistic theories. As far as mechanical reproduction is concerned, a number of great pieces of art can be attributed to Duchamp, most of which shall be discussed in detail in later sections of the paper. However, it is worth mentioning that the influence Duchamp had on art, through the challenges he posed to conventional thinking about artistic processes, gave much of the scheme to the yet-to-be-born essay of Walter Benjamin. Though it is said that Duchamp did not succeed in producing as many works of art as some of his predecessors and those who came after him, the few he did produce, and some of his subversive actions, showed that he was a revolutionist of art who wanted the old aura to be replaced with a new one, which Benjamin later came to champion as the thesis of his essay.

Overview of Walter Benjamin's Thesis

The major thesis of Walter Benjamin's essay touches on the conceptualization that the technical reproduction of works of art that takes place today is not a modern phenomenon, but that modernity has played a contributing role in ensuring and enhancing accuracy in the course of mass production. Throughout the essay, this thesis is elaborated so as to more or less praise the role of modern artistic discoveries in making what used to exist even better. The essay therefore analyses various forms of the development of mechanical visual reproduction, including photography, stamping and engraving. In each of these artistic practices, which in the opinion of Benjamin are not new but an exhibition of mechanical reproduction that has been with us for long, a new line of

Friday, October 18, 2019

SWOT Analysis assignment Case Study Example | Topics and Well Written Essays - 500 words

SWOT Analysis assignment - Case Study Example As for the screen in particular, the Galaxy S2 set the stage for astounding colour and vibrancy, and the S3 takes that and adds additional screen real estate. The HTC One X's screen is superb, yet preparation issues make it miss the top spot. If we are speaking about battery longevity, the HTC One X offers a somewhat nippier processor than the Galaxy S3. On paper, at any rate, it dominates the competition. According to the camera specifications, the iPhone 4S leads: the rich, even tone given by the 4S puts it somewhat in front of the One X. We haven't had the chance to properly test the S3's camera yet; on paper, at any rate, it isn't offering any significant upgrades over the S2. Without at least a couple of hours to play around with the S3, it's challenging to know exactly how useful its additional software features will be. We're keen to see how the new TouchWiz compares with Sense 4.0; however, we're still big devotees of iOS's straightforwardness. The study has found that Apple is the ruler of the market at present. As the CEO of this company, and based on the analysis of this vertical, it is recommended that the company should focus more on its strengths and leverage the available market opportunities. Some of the options available at present include diversification of products as well as geographical diversification into emerging economies, and increasing market share by adopting low pricing strategies. Apple Inc. has found high demand for the iPad mini and iPhone 5, and with the launch of iTV, higher market penetration can be expected in the near future. Strong growth of the mobile advertising market and increasing demand for cloud-based services will increase its overall market share

[narrative] A little learning is a dangerous thing Essay

[narrative] A little learning is a dangerous thing - Essay Example Little did we know that our mode of dressing was termed skimpy and irritated some people in Saudi Arabia. Our ignorance led to the climax of the conflict when my friend Jerry attempted to shake the hand of a lady who was passing near our hotel room. This act created a scene and cost us endless explanations that our intentions were not wrong. We were caught and locked up in a room for hours for indecent behavior. After long hours of discussion, we managed to convince the security men that we were simple visitors and had no ill intentions of any kind towards the lady. We even explained that we found it courteous to greet people. We were later released with severe warnings. At the end of the day we learnt that had we researched more on the culture of the Middle East, we would not have been in trouble for behaving badly. With the tough lessons learnt, life continues. My friends and I are very cautious especially when we do not have enough information on some aspects of

Tort Law Essay Example | Topics and Well Written Essays - 1500 words - 1

Tort Law - Essay Example This essay focuses on describing tort law, which today can mainly be divided into three broad parts: negligence torts, strict liability torts and intentional torts. The researcher explores intentional torts in the essay, which are offences committed by an individual who intends to harm, as he commits the act knowing that injury would be the result of his or her act, such as an assault. On the other hand, negligence is a type of tort that results unexpectedly. The action is normally not intended to harm, and the actor does not know the result of the act. The researcher states that in the negligence act, the action leading to injury is not intended, unlike the intentional tort. For example, the trespass of land and negligence are different from a nuisance case. For instance, the researcher mentions that in nuisance cases the actions deal with repetitive injuries, while trespass and negligence actions offer relief even if the injury resulted from one event. In the second part of the essay, the researcher discusses various compensatory law issues and vicarious liability. There are various goals of compensatory damages that were described by the researcher. The main goal of compensatory damages is to compensate for the personal injury and property damages that were caused and proved. Vicarious liability is another issue covered in the essay; the issue arises in regard to specific relationships between the defendant and another party, in which the defendant answers for the acts of the other party.

Thursday, October 17, 2019

In a culturally diverse world, the universality of human rights remains Assignment

In a culturally diverse world, the universality of human rights remains unsettled. Discuss - Assignment Example The UK has enacted several pieces of legislation that safeguard the human rights of its citizens, such as the Human Rights Act 1998, which introduced into domestic law the human rights safeguarded by international law, like the European Convention on Human Rights: the right to life, the right to a fair trial, freedom of expression, the right to education, freedom from slavery and forced labour, and freedom of religion. A key development was the Declaration of Human Rights in 1948 by about 50 of the United Nations member countries and subsequent ratifications by other countries. Other international conventions that followed aimed at expanding the doctrine of human rights to include civil and political rights, cultural rights, socio-economic rights and the prohibition of all forms of discrimination (Claude and Weston, 2006). For instance, the International Covenant on Economic, Social and Cultural Rights was adopted in 1966 and ratified by several states. Human rights refer to the recognition and respect of human dignity. Human rights entail a set of moral principles and legal guidelines that promote and protect the identity, values and abilities of individuals in order to enhance standards of living (Claude and Weston, 2006). ... This paper will discuss the contents and principles of human rights, the universality versus cultural relativism of human rights, and finally outline the current trend in the protection of universal human rights. In the conclusion, the paper will offer a recommendation on whether universality can exist with cultural relativism and ensure universal human rights. Contents and principles of human rights The first guiding principle of human rights is equality and non-discrimination. Non-discrimination acts as the basis of international human rights law and is outlined in all the human rights treaties. International human rights conventions, such as the International Convention on the Elimination of discrimination, especially racial and women's discrimination, require all state governments to enact legislation that protects citizens from such discrimination (Claude and Weston, 2006). This principle is applicable to all human beings regardless of non-exhaustive criteria that include sex, religion and other identifiable status of the individuals. According to Article 1 of the Universal Declaration of Human Rights, all human beings are born free and equal, and their dignity should be respected (Talbott, 2005). The second content of human rights is the interdependent and indivisible nature of human rights. This principle asserts that human rights are interrelated and interdependent, since enforcement of one right leads to the advancement of the other rights and an ultimate increase in general standards of living (Claude and Weston, 2006). Accordingly, civil rights such as the right to life and political rights lead to equal protection by the law and fair trial. In addition, a violation of one right such as the

Business and Social Approaches to Social Media Essay - 1

Business and Social Approaches to Social Media - Essay Example Identifying how this particular tool is now being used, and identifying the ways that it can work for others that are using the Internet, is then creating a different approach to connecting online. Theories of Social Media The use of social media for businesses is one which relates specifically to the ability to connect with others online through specific mediums. The social media platforms consist of areas in which users can interact and connect with other like-minded users. The growth of this has led to platforms such as Facebook, Twitter, Wikipedia and business areas where others can connect. The concept is now known as Web 2.0, where interaction and user-generated content provide more applications and alternatives for those that are online. The concept of using these tools is based on the demographics, the ability to display a specific message to viewers, and the ability to collaborate with business ideas that will attract potential customers to a business (Kaplan and Haenlein, 2010). The approach which is now being taken with social media has allowed the main concept to transform the way in which many are approaching business and interactions. When searching for user-generated content, there is the ability to connect with others that are interested in specific ideologies, consumerism, and choices. A business can specify demographics, target markets and other concepts that come from a given profile. From this, there is the ability to transfer information and knowledge about the business and to create a connection to customers. This creates a social graph, in which one business connects to potential customers and begins to expand with the specific target markets that are available through the interactions and known interests that are listed on the various online portals (Qualman, 2011). The concept which has been used in approaching target markets has also led to the promotional mix as a model which is followed. This has been built into a hybrid model that is used for communicating with others and for interactions that are able to get specific results with online marketing. Consumer-to-consumer communications as well as promotions from businesses are the two main focuses of the hybrid promotional mix that is used for user-generated content. This occurs with the main promotion, advertisement or page that is listed on a social media site or website. The consumer then has the ability to focus on direct responses by commenting on the promotional tools with engagement. For businesses, this means that the promotions need to have positive responses from consumers while ensuring that the discussions work in favor of the business. The promotional mix that is needed is then based on gaining a sense of control over the promotions that are used for the social media portals (Mangold and Faulds, 2009). The interactions with customers and the way in which this is associated with the promotional mix are then leading to the need to make the public relations of a business the main priority. The amount of control that is a part of the user-generated content is based on finding a way to build credible forms of marketing and interactions online. The use of effective communication, the ability to increase exposure, and creating a strong presence and brand loyalty are some of the focuses that are a part of using social media online.

Tuesday, October 15, 2019

Karl Marx - Manifesto of the Communist Party Essay

Karl Marx- Manifesto of the Communist Party - Essay Example ried on an uninterrupted, now hidden, now open fight, a fight that each time ended, either in a revolutionary constitute of society at large, or in the common ruin of the contending classes† (Marx). Accordingly, Karl Marx viewed societal structures as comprising effectively two components; namely the â€Å"bourgeoisie and the proletariat† in asserting that â€Å"our epoch, the epoch of the bourgeoisie, possesses, however this distinct feature: it has simplified class antagonisms. Society as a whole is more and more splitting up into two great hostile camps, into two great classes directly facing each other - bourgeoisie and proletariat† (Communist Manifesto, 1848). To this end, the underlying proposition of the Communist Manifesto is that the social class struggle under the capitalist social paradigm, whilst creating oppression of the â€Å"proletarians†, ultimately lends itself to the demise of capitalism through revolution. Indeed, Linklater posits that â€Å"the structure of world capitalism guaranteed the emergence of the first authentically universal class which would liberate species from the consequences of estrangement between states and nations† (In Devetak et al, 2007 66). Moreover, Larson et al refer to the argument that socialists embraced the task of working class mobilisation and that â€Å"the perspectives which socialist theorists can be divided are revolutionary trade union activity and revolutionary transformation of capitalist society (Larson et al, 38). On the one hand, if we consider this in terms of the contemporary socio-economic framework; continuous evolution of social structures and demise of entrenched class barriers would suggest that Marx’s â€Å"bourgeoisie and proletariat† class model may be redundant and therefore should be viewed as solely contextually in terms of the socio-political backdrop influencing Marx’s theory at the time (Bottomore 23). For example, Bottomore highlights that â€Å"changes in working class politics during

Monday, October 14, 2019

ERP Comparison of Developed and Emerging Markets

ERP Comparison of Developed and Emerging Markets

Chapter 1: Introduction

1.1 Research Topic

The investment dilemma hits when individuals earn more than their consumption needs. Considering the fast-rising inflation globally, saving the surplus earnings for future consumption is not sufficient anymore. Hence, making an investment such that the surplus earnings grow or even multiply over time is almost imperative. Such an investment can be made in many ways, for instance commodities, stocks, bonds, pension funds, real estate etc. This study is concerned with individuals' investment in stocks. When an individual invests, he/she expects a certain rate of return in the future from the investment which should ideally compensate future consumption needs, future increases in inflation and uncertainty of return, if any. Therefore, investments with higher returns are preferred. A number of studies find evidence of stocks giving higher returns than government bonds, although the relative uncertainty of return from stocks is much higher than that from bonds (Dimson et al, 2002; Ibbotson and Sinquefield, 1976). Consequently, the more uncertain the future return gets, the riskier it is to invest. Hence, when an individual invests in stocks, he/she expects added compensation for added risk, which leads to the concept of Equity Risk Premium (ERP). ERP is the surplus return from stocks/equities over the return from a nearly risk-free (here on mentioned as risk-free) asset such as government bonds. It is the premium that individuals demand for bearing the additional risk in equity investments (Reilly and Brown, 1999). ERP is calculated using equation-1 (restated below). Stock returns can be the returns from a benchmark index (market returns) such as the FTSE 100, and the returns from the risk-free asset (risk-free returns) can be those from UK gilts (Reilly and Brown, 1999).

ERP is an important consideration from an investor's point of view for building and analysing a domestic equity portfolio or an entire equity market, especially for an investor looking to diversify globally (here on mentioned as a global investor). Therefore, it is a widely researched topic; however, the existing literature is still inadequate, considering there are numerous debates and puzzles pertaining to various aspects of ERP. Hence, looking at its significance in theoretical and practical finance, ERP is chosen as the central topic to be researched in this study.

1.2 Research Background

Individuals (retail investors) use ERP to forecast the expected growth of their equity portfolios over the long term and for portfolio allocation decisions. Corporations (here on mentioned as organisations) need ERP as an input to determine the cost of equity, i.e. the annual expected rate of return from investment in stocks, and for capital budgeting decisions. Overall, ERP is a significant factor in most risk-return models of corporate finance and investment management. Hence, estimating future ERP and identifying possible reasons for the results found is an important financial and economic research topic for academia and practitioners alike. Although historical data is most popularly used to estimate future ERP, there exist financial, economic and asset pricing models developed over the years which predict an implied ERP based on companies', macroeconomic and equity market data. Evidence from the relevant literature suggests that every ERP estimation method has a distinct set of assumptions and underlying ideas, therefore exuding both merits and demerits when compared to another estimation method.
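Equation-1 itself is not reproduced in the text; based on the definition in section 1.1, its standard form is simply the market return less the risk-free return over the same period (a reconstruction from the surrounding description rather than the original exhibit):

ERP_t = R_{m,t} - R_{f,t}

where R_{m,t} is the return on the benchmark equity index and R_{f,t} is the return on the risk-free asset in period t. As a purely hypothetical illustration, if the FTSE 100 returned 9% over a year while UK gilts returned 4%, the realised ERP for that year would be 9% - 4% = 5%.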
Rapid economic growth of emerging countries has been apparent, especially because of industrialisation. Consequently, the performance of emerging equity markets has been remarkable in the past decade. The big 4, i.e. Brazil, Russia, India and China (BRIC), alone accounted for more than 50% of the world GDP in 2006 (RICS, 2008). Due to saturation in developed countries and growing avenues for investment in those emerging, the ERP of emerging markets has risen due to growing investor confidence. Although perceived social, economic and political risks are equally high, financial systems have strengthened and macro-economic conditions have improved drastically for most emerging countries. Barry et al (1997) argue that investing in emerging markets is more than just profitable, considering the risk-return trade-off. Hence, gauging the future of emerging equity markets has become a vital research topic for economists, finance professionals and global investors alike. In a discussion of emerging markets, India cannot be left out. Post liberalisation (i.e. post 1991), India is definitely the second-most preferred emerging economy by global investors after China. Although Foreign Direct Investment (FDI) flows have been average compared to other emerging countries, Foreign Institutional Investment (FII) flows increased almost 10 times, from United States Dollars (USD) 739 million in 2002 to a record USD 7.59 billion in 2003. CALPERS, the world's biggest pension fund with a base of USD 165 billion, has recently included India in their list of countries for investment (BSE India, 2008). The noteworthy rise to the position of the sixth largest emerging equity market, with a total market capitalisation of USD 818 billion and 8% p.a. average economic growth (CIA Fact-book, 2008) over the past decade, accentuates the importance of India's ERP estimation and analysis.

1.3 Research Gap, Objective and Questions

Most of the research on ERP has focussed on developed markets, clearly because of their sound history and stable fundamentals. Within the limited research conducted on ERP in emerging markets, Salomons and Grootveld (2003) demonstrate the evident differences in ERPs of developed and emerging markets and claim that the global business cycle influences these differences. Claessens (1995) argues through his empirical research that investment in emerging markets can be fruitful in the long term, considering that high ERP compensates for high risk. Although these and similar related researches vaguely guide investors wanting to explore emerging markets, there lacks clear evidence of the possible risks attached and whether those risks can be tackled to earn the complete benefit of the high ERP. Benartzi and Thaler (1995) and Campbell and Cochrane (1999) claim that the reason for the increase in investors' interest in U.S. markets was the high ERP it offered. Hence, if the same rule is applied to emerging markets, then investments should be made without any prior estimation of possible risks, especially considering the success of U.S. markets. However, it is not the case, as investors are still sceptical about getting confirmed high returns from emerging markets. Therefore, the precise reasons for the difference in ERPs of developed and emerging markets have not been clearly identified as yet, hence constituting the first research gap. There exists considerable evidence on how political, social and especially macroeconomic factors affect the equity market returns of developed countries, especially the U.S. (Chen et al, 1986).
Considering the limited work done on ERP of emerging markets on the whole, negligible contribution has been made to analysing ERP in India with respect to its growing economy, Mehra (2006) being the most notable, hence constituting the second research gap. Considering the importance of ERP, it is interesting to note that in spite of there being many ways to calculate ERP, there exists no consensus on the best approach. Financial market analysis is performed based on historical data, and the ERP measured from the past performance of equity markets is most commonly used as an estimate of future ERP. For instance, Ibbotson and Sinquefield (1976) exemplified the first accurate calculations of the annual rate of return on equity investments in the U.S. and ERP. Since then, Siegel (1992) and Dimson et al (2002) are two of the most notable researches on ERP estimation using the historical method. However, there exist models developed, for instance, by Fama and French (2002) and Arnott and Bernstein (2002) that determine future ERP entirely based on forward-looking information through estimation of future investors' and markets' expectations. This variation of approaches to ERP estimation has only widened the range of results and complicated the unresolved debate, hence constituting the third research gap.

The 3 research gaps identified above lead to the overall Research Objective of this study, which is: comparative analysis of ERP in the leading developed and emerging markets; determining the macroeconomic influence on ERP; and examining the ERP estimation methods; all from a global investor's point of view. It is not realistically possible to fill the research gaps entirely through this study, considering time, knowledge and relevant experience constraints. However, this study aims to fulfil the above objective through the accomplishment of satisfying solutions to the following 3 Research Questions:
1. After estimating future ex-post ERP in the chosen sample index of developed and emerging markets, what is the impact of risk responsible for the differences found through the comparison of their risk-return trade-off?
2. What effect do the country-specific macroeconomic factors have on the ERP in India, if any?
3. After estimating future ex-ante ERP in India using a supply-side method and comparing it with the estimated ex-post ERP, what is the most suitable method for global investors, if at all, and why?

Research Contribution

As this study is predominantly aimed at analysing the ERP of leading emerging markets and particularly India, it is hoped that this study contributes to simplifying the decision making of global investors regarding their equity investments in emerging markets and India. Furthermore, it is hoped that this study provides guidance to global investors regarding the macroeconomic situation in India and its influence on the ERP, for sound portfolio management. Moreover, it is hoped that this study adds a small brick to the large edifice of ERP analysis/measurement/estimation on the whole. Finally, if this study motivates the eminent researchers and consequently triggers some ground-breaking academic scholarship regarding the ERP of emerging markets, then the worthiness of this study will be truly identified.

1.4 Research Structure

The following is the chronology and brief content of the chapters in this study here on:
Chapter 2: Literature Review: This chapter aims to explain the historical development of ERP through empirical researches and relevant theoretical background.
Furthermore, it examines important research literature on ERP estimation methods and emerging equity markets.
Chapter 3: Overview of Research Methodology: This chapter aims to briefly explain the chosen research methodology for this study and justify its appropriateness. It also describes the chosen data collection method and clarifies how the data will be collected and used for achieving the research objective.
Chapter 4: Data Analysis, Findings and Interpretative Analysis: This chapter aims to identify the collected data, explain the data analysis technique/model/method in detail, analyse the data that is collected by using the chosen methods and models and, finally, interpret, examine and evaluate the results/findings from the analysis to identify justifiable solutions to the research questions. The chapter is divided into 3 parts, each part pertaining to each research question, and the procedure is conducted separately for each.
Chapter 5: Discussion and Conclusion: This chapter aims to summarise the results from Chapter 4, recapitulate the entire paper and testify to the level of fulfilment of the research objective. Also, it plausibly links the past literature to the results from this study to check the level of accomplishment in filling the research gap and to identify the need for future study.

Chapter 2: Literature Review

2.1 Chapter Introduction

ERP is a vital numerical figure in practical modern finance as it is considered by financial analysts, business managers and economists for the purpose of decision-making; perhaps best testified by Welch (2000, p.501), wherein he calls ERP "the single most important number in financial economics". Consequently, it is and has been one of the most fascinating topics for academic scholarship, leading to a vast amount of literature. This chapter discusses the various significant perspectives about ERP generated from the literature. The literature reviewed in this chapter is primarily related to the research questions that this paper aims to answer; having said that, other theoretical developments and empirical researches in the field of portfolio management and corporate finance that are significantly relevant to the research topic are also discussed. Broadly speaking, the content matter in this chapter is organised in chronological order, beginning from the earliest. Here on, this chapter is divided into 5 sections. The historical advancements in productive assessment of the relationship between equity risk and return, resulting from empirical researches which led to the conceptualisation of ERP, are discussed in section-2. The next section-3 highlights the important theoretical developments which laid the foundation for the large edifice of researches on investment management. Section-4 focuses on the models/methods that were formulated based on the theories, with an aim to calculate expected returns and measure and estimate ERP. It also looks at the important contemporary researches in the field of ERP with a brief backdrop of macroeconomic factors. The following section-5 highlights the important literature with respect to the ERP Puzzle. It discusses the significant attempts by researchers to solve the puzzle. The next section-6 follows, which briefly looks at the important literature on emerging equity markets overall. Finally, section-7 summarises the entire discussion.

2.2 Historical Conceptualisation of ERP

The apt risk-return trade-off sought by investors worldwide augmented the importance of ERP evaluation and forecasting.
Consequently, vast theoretical and empirical research under various objectives has been conducted to date since the early 20th century on measuring, estimating and analysing ERP, most of which has concentrated on the developed markets, especially the U.S. Furthermore, eminent financial economists have been engaged in empirical analysis of past investment results to gauge future investment strategies. In the late 19th and early 20th centuries, most economists did not endorse the importance of risk in evaluating and justifying excess returns. The conception of the fact that incremental profit on equity investments is a result of the higher risk attached was a gradual process. For instance, Clark (1892), professor at Columbia University, claims that investments in some organisations give higher returns than the risk-free rate and than some other organisations because those organisations have the advantage of a monopoly in the market. Furthermore, modernisation and development in technology lead to a comparatively higher competitive advantage, which in turn gives excess returns. However, the renowned author of the book Risk, Uncertainty and Profit, Knight (1921), does not endorse Clark's view but instead criticises him for inadequately exploring the association of risk and return in the models used in his economic research. Knight analysed the importance of risk in equity investments through the past performance of U.S. markets and aimed at relating it to the concept of profit in basic economic theory. He argues that any kind of risk deserves a premium (i.e. excess returns), even if the risk is unquantifiable (which he later termed uncertainty), although he could not suggest any solid and foolproof way of measuring the premium that he justified. As a cumulative result, the debate on equity risk and the attached premium flared up, which necessitated ground-breaking empirical researches based on historical data of past performance. Hence, many scholars developed stock price indices in the early 20th century in order to measure long-term investment performance and estimate future returns: for instance, Mitchell (1910, 1916), Persons (1916, 1919) and Cole and Frickey (1928) in the U.S., and Smith and Horne (1934) and Bowley et al (1931) in the U.K. However, Hautcoeur et al (2005), in their analyses of early stock market indices, argue that the main motive in the development of these indices was forgotten in no time and instead they were used to gauge the influence of macroeconomic cycles on equity markets and as an easier way to estimate macroeconomic fluctuations. The popular index of 30 stocks developed by Charles Dow was never aimed at estimating future long-term returns but instead at measuring daily returns on the market. Consequently, the relevance of the returns from risk-free assets like government bonds to comparatively risky equity returns was tested. The difference in the rate and magnitude of their returns solidified the so-far debated idea of returns being a compensation for the risk attached to the investments made. Smith (1924) advocates through empirical research, and later through his book, that equities give higher returns than bonds because they carry higher risk. He collected historical data on stock prices, dividends and corporate bonds from the stock exchanges at Boston and New York spanning 1866-1923. Furthermore, he divided this period into 4 sub-periods to recognise the economic development.
After creating separate portfolios for each asset class (10 securities in each portfolio), he measured cash income and capital gains from both. His conclusion was that equity investments give higher appreciation and returns than bonds in the long term, in spite of economic changes in the sub-periods. Further in his book, he suggested a mechanical way of calculating ERP by paying out the equivalent amount of bond returns from the total equity returns and re-investing the remainder in the same equity portfolio. In this way, the relative growth rate of the equity portfolio is the ERP over the bond portfolio. Smith's estimation and method of ERP calculation attracted many retail investors towards the equity markets in the 1920s. Later, Smith's attempt to assess equity investment returns over bonds was improved upon by Cowles (1938). He collected historical data on most of the stocks of the NYSE, instead of only 10, for the period 1872-1937 and notably created the first nearly-accurate index of total returns from common stock investments. Furthermore, he suggested re-investing the dividend yields into the equity portfolio to avoid measuring cash returns and value appreciation separately, the way Smith did. However, he made no concluding remarks, such as that equity investments can be more profitable than bonds, unlike Smith. By then, although the idea of an ERP was making financial and economic sense, a solid way of estimating future ERP could not yet be developed; the two main reasons being the unavailability of adequate historical equity market data and ignorance about the possibility of a forward-looking method. However, later, John Williams (1938) wrote the first book that defined, modelled and estimated forward-looking ERP. Although he estimated future ERP in the U.S. using the Dividend Discount Model (DDM), he argued that ERP estimates based on the Historical Method are equally precise. He believed that the most suitable way to calculate the riskiness of a security is by appending a premium to the risk. Later, he also became the first researcher to numerically estimate a forward-looking ERP for the U.S. By then, the concept of ERP had been clearly understood and its importance had been recognised. Nearing the late 1940s, economists and researchers had realised the importance of risk and conceptualised ERP as an essential ingredient to calculate future returns on equity investments. Moreover, enough historical data on U.S. equity markets was also available for past performance analyses and empirical researches. Even so, there was no method/measure that could quantify future risk and returns for any given portfolio of investments, as most experts and investors believed in calculating the risk-return trade-off individually for equities and other securities. However, that did not serve the purpose of an optimal risk-return trade-off as far as the entire portfolio of investments was concerned, until 1952 when crucial theoretical developments began.

2.3 Theoretical Developments

This section summarises the important theoretical developments which built models to quantify the future risk and returns of equities, and related vital researches in portfolio investment management and corporate finance, with a backdrop of their implications on ERP. The 4 most important theories/models reviewed in this section are Portfolio Theory, Capital Market Theory, the Capital Asset Pricing Model and Arbitrage Pricing Theory.

2.3.1 Markowitz's Portfolio Theory

Harry Markowitz (1952) introduced Portfolio Theory, or what is now called Modern Portfolio Theory (MPT).
It provides a formalised method to diversify the portfolio of all investments (not just equity) with an aim to achieve the highest possible returns for the lowest possible risk. MPT records the expected returns, volatility or risk (standard deviation) of each investment and the correlation of one investment to another to create the best combination. Therefore, risk is minimised while maintaining the expected returns, if investments are diversified based on the risk of each individual investment. However, Markowitz (1952) assumed that investors are naturally risk averse, i.e. they tend to choose the investment with the highest returns for a given level of risk and refrain from investing if the risk is higher than acceptable/favourable levels. Hence, by applying MPT, investors can choose less risky and highly risky investments at the same time in such a way that cumulative expected returns are unharmed and optimised. The risk appetite of each investor, however, differs from that of others. Therefore, based on the above assumption, Markowitz (1952) believed that, depending on the risk appetite, every investor aims at attaining the highest possible returns for the level of risk that he/she is ready to bear. In other words, every investor aims to build an Efficient Portfolio. Consequently, all the portfolios, ranging from high-risk to low-risk, which give optimal returns lie on the Efficient Frontier, as termed by Markowitz. Although Markowitz's MPT is still followed by many experts and investors, it also faces criticism for its unreal assumptions. MPT's assumption of volatility, with figures of standard deviation or variance of an investment as its risk measurement, may not always be true, especially for equities. It speaks about only a single period when actually volatility changes over time. Therefore, even if a portfolio is efficient today, it may not be the same tomorrow. For instance, in an economic crisis or equity market crash, there is a high possibility of the correlation of two assets in an efficient portfolio increasing above average. Malkiel and Xu (1997) empirically prove that the volatility of stocks increases with an increase in institutional ownership in the organisations. Similarly, Campbell (2000) shows results of increased volatility with a reduction in the number of conglomerates as organisations started to narrow their focus. Lofthouse (2001) criticises MPT on the fact that it bases its calculation of expected returns, volatility and correlation on past historical figures, which is inadequate, especially when the aim is to build the most efficient portfolio possible. Furthermore, Bernstein (2002) notes that MPT assumes that there is a possibility that some investments absolutely do not correlate with any of the other investments, which is untrue, as each investment at some point in time correlates with one or the other investment in the portfolio. Hence, although the MPT model enables investors to optimally gauge the future risk to gain the highest possible returns, it is based on idealistic, theoretically decorative and practically unreal assumptions.

2.3.2 Capital Market Theory

After MPT was developed, many researchers worked on the most important missing link in MPT, the inclusion of a risk-free asset with zero volatility, zero correlation with risky assets and certain future returns. Tobin (1958) was the first to extend Markowitz's Portfolio Theory by introducing a risk-free asset to the Efficient Portfolio. Later, Sharpe (1964), Lintner (1965) and Mossin (1966) contributed to his idea as they independently worked on similar theories.
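To make concrete the mean-variance arithmetic that MPT formalises and that CMT extends, the following minimal sketch computes the expected return and risk of a two-asset portfolio. It is purely illustrative: the weights, expected returns, volatilities and correlation are hypothetical figures, not data from this study.

import math

# Hypothetical inputs for two assets, A and B (illustrative figures only)
w_a, w_b = 0.6, 0.4        # portfolio weights, summing to 1
r_a, r_b = 0.09, 0.05      # expected annual returns
s_a, s_b = 0.20, 0.10      # annual volatilities (standard deviations)
rho = 0.3                  # correlation between the two assets

# Portfolio expected return: the weighted average of the asset returns
exp_return = w_a * r_a + w_b * r_b

# Portfolio variance: individual variance terms plus a covariance term;
# the lower the correlation, the greater the diversification benefit
variance = (w_a * s_a) ** 2 + (w_b * s_b) ** 2 + 2 * w_a * w_b * s_a * s_b * rho
risk = math.sqrt(variance)

print(f"Expected return: {exp_return:.2%}, risk (standard deviation): {risk:.2%}")

Repeating this calculation over many weight combinations traces out the attainable risk-return pairs; the upper-left edge of that set is Markowitz's Efficient Frontier, and adding a risk-free asset to the mix is precisely the step that CMT, discussed next, formalises.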
The final development is known as Capital Market Theory (CMT). It is important to note that CMT shares 3 assumptions with those made by Markowitz (1952) for MPT, as follows:
- Investors are always risk averse
- Investors' decisions are solely based on expected returns and their volatility
- There exist no transaction costs and taxes
However, the following are the new assumptions that CMT makes, as extracted from Lofthouse (2001):
- All the investors have the exact same time-horizon for their investments
- Borrowing and lending at the risk-free rate is not restricted
- All the investors have the exact same expectations for correlation, risk and returns
CMT states that the volatility of Efficient Portfolios that include the risk-free asset is actually the linear equivalent of the volatility (risk) of the portfolios before the risk-free asset's inclusion. Hence, these combined Efficient Portfolios lie on the straight-line graph of risk and return joining the risky and risk-free assets. This way, the optimal combined portfolio, i.e. point-M in Figure 2.2, is identified at the tangency point formed by the ray starting from point-F in Figure 2.2, i.e. the expected return of the risk-free asset, and the Efficient Frontier. It is optimal because it gives the highest possible returns for any level of risk. Therefore, it is known as the Market Portfolio, as it has all risky assets, and the ray is known as the Capital Market Line (CML). CMT advocates that all investors should aim to build their portfolios on the CML depending on their risk appetite. They could invest in the risk-free asset by lending, or borrow at the risk-free rate to invest in the Market Portfolio. Either way, their portfolios will earn more returns than other portfolios (blue spots in Figure 2.2) on or off the Efficient Frontier, for any given risk (Brealey et al, 2007). Therefore, under CMT the expected returns of the equity portfolio are calculated by determining the slope of the CML, which is the change in return for a given change in risk, and the intercept, which is the return of the risk-free asset (See Equation-2). The risk is measured by the standard deviation (Lofthouse, 2001). The development of CMT was ground-breaking in the field of investment management. It clarified the effect of including a risk-free asset in an equity portfolio. It formed the first equation made up of ERP, risk and returns, all together. In Equation-2, the change in return is the market return less the risk-free return, which is actually the ERP. However, this estimation of ERP is an empirical deduction (calculated from the slope of the CML), as the early development of CMT by Tobin (1958) was just an extension of MPT, until it was theoretically formalised by Sharpe (1964), Lintner (1965) and Mossin (1966) independently, which then led to the gradual development of the Capital Asset Pricing Model (CAPM). Hence, the CAPM is usually referenced as SLM's CAPM for Sharpe's, Lintner's and Mossin's equal and vital contributions.

2.3.3 Capital Asset Pricing Model

The CAPM is undoubtedly the most widely known model to calculate expected returns. It is a sophisticated improvement of CMT, which in turn is an extension of MPT, and therefore builds on the relationship/trade-off between risk and returns.
It is primarily based on the universal classification of risk into 2 broad categories, namely:
- Systematic: risk that affects almost all assets equally
- Unsystematic or Specific: risk that affects only an individual asset or asset class
(Sharpe, 1964)
The CAPM is developed through the conception of the Security Market Line (SML) (See Figure 2.3), which is a ray similar to the CML originating from the return of the risk-free asset. However, the big difference is that the SML represents the linear relationship between risk and return for individual assets and/or inefficient portfolios with respect to the market portfolio, unlike the CML, which only represents efficient portfolios. The risk that is measured is only systematic, as it is un-diversifiable and hence rewarded, unlike unsystematic risk. The standardised measure of this systematic risk is called Beta, which is the covariance of an asset or portfolio with the market portfolio divided by the variance of the market portfolio. The market portfolio has a Beta equal to 1. An asset with a Beta higher than 1 is riskier than the market portfolio, and hence a higher return is expected. Assets with a Beta lower than 1 are less risky, with lower expected returns. The expected returns are calculated by adding the return on the risk-free asset to the product of the ERP and the systematic market risk borne by the stock (See Equation-3) (Sharpe, 1964; Lofthouse, 2001). However, the value of Beta for individual stocks or portfolios is not known. It needs to be estimated and is hence subject to errors. Understanding the mechanics and application of the CAPM is imperative to the study of ERP, as the slope of the SML, i.e. the linear relationship between risk (Beta) and return, equals the difference between market returns and risk-free returns, which is ERP. The application of the CAPM is extremely vital in the context of ERP measurement methods, as it uses ERP as an input to calculate the expected returns on a stock. The empirical studies and relevant literature related to the CAPM and its applicability in ERP estimation methods are discussed in section 2.4.3.

2.3.4 Arbitrage Pricing Theory

As seen before, MPT and CMT both assess only the cumulative risk of individual assets and market risk respectively, while calculating expected future returns. Ross (1976) proposed the Arbitrage Pricing Theory (APT) based on the perception that the risk of assets and their future returns vary in accordance with the risks affecting the overall economic situation. Ross believed that unsystematic risks can be curbed/nullified through diversification, as suggested by MPT and the CAPM, and hence will not affect expected returns. But systematic risks, having influence on all assets, cannot be diversified and hence can cause fluctuation in the expected returns. Although he did not suggest any particular factors that can trigger the systematic risk, the empirical results of Burmeister et al (1997) implied the following 5 factors:
- Inflation
- Business cycle
- Investor confidence
- Time horizon
- Market timing
APT states that the sensitivity of assets to the unanticipated instability in the above factors varies, due to which one of them can get mispriced, therefore creating an arbitrage opportunity. Consequently, by selling the highly-priced asset to buy the low-priced asset, the investor can ensure a profit and nearly-perfect pricing of both assets. This arbitrage can be termed the Risk Premium for that particular factor. However, this profit is expected and not guaranteed, unlike usual arbitrage gains.
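For reference, a common textbook statement of the APT pricing relation (a reconstruction from standard sources, not an equation reproduced from this dissertation) writes the expected return of asset i as the risk-free return plus the sum of the asset's factor sensitivities multiplied by the corresponding factor risk premia:

E(R_i) = R_f + \beta_{i1}\lambda_1 + \beta_{i2}\lambda_2 + \dots + \beta_{ik}\lambda_k

where \beta_{ik} is the sensitivity of asset i to factor k and \lambda_k is the risk premium for bearing exposure to that factor. With a single market factor, this collapses to the CAPM relation described in section 2.3.3, in which the lone premium is the ERP.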
Like MPT and CMT, APT also has some underlying assumptions, as follows:
- No transaction costs
- Short selling, i.e. selling assets that are not owned, is allowed
- Enough assets to diversify unsystematic risks
(Ross, 1976)
APT has faced many criticisms on its applicability in calcul