Saturday, August 31, 2019

The Glaze Storms of 1998

Ice storms, also referred to as glaze storms, cause considerable damage every year to trees in urban and natural areas. They vary considerably in their severity and frequency. Ice storms result from the ice formation process, which is influenced by general weather patterns. Ice accumulates when supercooled rain freezes on contact with surfaces, such as tree branches, that are at or below the freezing point (0°C). This generally occurs when a winter warm front passes through an area after the ground-level temperature reaches or falls below freezing. Rain falls through layers of cooler air without freezing, becoming supercooled. Periodically, other climatic events, including stationary, occluded, and cold fronts, also result in ice storms. The purpose of this paper is to gain a better understanding of the 1998 ice storm. This paper features three main sections: an introduction, the main body (damage to woodland), and finally, a conclusion. In the main body of this paper, the effects of fire and pests/disease are discussed in detail. In the conclusion, a comparison is made between fire and pests/disease on the one hand and the ice storm on the other. By the end of this paper, one should gain a better understanding of the severity of the 1998 ice storm as well as of other damaging agents that affect the woodland of eastern North America. Ice storms are often winter's worst hazard. More slippery than snow, freezing rain or glaze is tough and tenacious, clinging to every object it touches. A little can be dangerous; a lot can be catastrophic. Ice storms in northeastern America have been common, but the 1998 ice storm was exceptional. Ice storms are a major hazard in all parts of Canada except the North, but are especially common from Ontario to Newfoundland. The severity of ice storms depends largely on the accumulation of ice, the duration of the event, and the location and extent of the area affected. Based on these criteria, Ice Storm '98 was the worst ever to hit Canada in recent memory. From January 5-10, 1998, the total water equivalent of precipitation, comprising mostly freezing rain and ice pellets and a bit of snow, exceeded 85 mm in Ottawa, 73 mm in Kingston, 108 mm in Cornwall and 100 mm in Montreal (Environment Canada, Jan 12/1998). Previous major ice storms in the region, notably December 1986 in Ottawa and February 1961 in Montreal, deposited between 30 and 40 mm of ice - about half the thickness from the 1998 storm event (Environment Canada, Jan 12/1998). The extent of the area affected by the ice was enormous. Freezing precipitation is often described as "a line of" or "spotty occurrences of" freezing rain. At the peak of the storm, the area of freezing precipitation extended from Muskoka and Kitchener in Ontario through eastern Ontario, western Quebec and the Eastern Townships to the Fundy coasts of New Brunswick and Nova Scotia. What made the ice storm so unusual, though, was that it went on for so long. On average, Ottawa and Montreal receive freezing precipitation on 12 to 17 days a year. Each episode generally lasts for only a few hours at a time, for an annual average total of between 45 and 65 hours. During Ice Storm '98, it did not rain continuously; however, the number of hours of freezing rain and drizzle was in excess of 80 - again nearly double the normal annual total. One of the most appealing features of Eastern Ontario is the extensive forest cover, which is made up of woodlands of varying structure. 
These woodlands, as well as natural fencerows, windbreaks, and plantations of pine and poplar, dominate the landscape. Icing impacts may best be understood by treating spatially larger scales in turn, starting with individual trees, proceeding to stands, and finally to forest landscapes. Ice damage to trees can range from mere breakage of a few twigs, to bending stems to the ground, to moderate crown loss, to outright breakage of the trunk. In the 1998 Northeastern ice storm, icing lasted long enough that many trees which were bent over had their crowns glued to the snow surface by the ice, in many instances for as long as 3 weeks. Some of those trees regained an erect posture after release from the snow, while many others remained bent over after 2 years. The severity of damage is generally believed to be closely related to the severity of winds following the heaviest ice accumulations. Damage varies across a range of severity and subtlety: minor branch breakage; major branch loss; bending over of crowns; root damage; breakage of trunks; and, in some hardwoods, splitting of trunks. Depending on the stand composition, the amount of ice accumulation, and the stand history, damage to stands can range from light and patchy to the total breakage of all mature stems. Complete flattening of stands occurred locally in the 1998 Northeastern storm. In response to more moderate damage, effects on stands can include shifts in overstory composition in favor of the most resistant trees, loss of stand growth until leaf area is restored, and loss of value of the growth due to staining or damage to stem form. The term landscape refers to a "group" or a "family" of trees. I use the term loosely because the size and composition of landscapes differ from region to region. The degree of damage is typically highly skewed by area. For example, in the January 1998 Northeastern storm, 1,800,000 ha of damage in Quebec was assessed by the Ministry of Natural Resources: very severe 4.2%, severe 32.0%, moderate 29.9%, and slight/trace 33.9% (The Science of the Total Environment, Vol. 262, Issue 3, November 15, 2000, pp. 231-242). The effects on entire forest landscapes are highly patchy and variable. They also depend significantly on how landowners respond to the damage. Disturbance caused by diseases, by themselves or in conjunction with disturbance by insects, abiotic factors such as drought, fire and wind, and, increasingly, human activities, has played a critical role in the dynamics of many forest ecosystems in North America. In the predominantly coniferous forests of western North America there are considerable areas undisturbed directly by human activities. In these areas, diseases kill trees or predispose them to other agents of disturbance, resulting in gradual change in stand composition and structure. In areas disturbed by the forest management practices of harvesting or exclusion of fire, increased disease incidence and severity has increased the damage caused by disease and, consequently, the rate of change. In the absence of introduced diseases in the predominantly deciduous forests of the Appalachian region of eastern North America, forests are relatively healthy. Here, forests are disturbed significantly by disease only after they are disturbed or stressed by other agents, predominantly defoliating insects and drought. In the eastern montane coniferous forest, chronic wind damage is a major predisposing factor to disease. 
Past harvesting practices, introduced diseases and insects, and fire exclusion have in some instances resulted in large areas of similar species and relatively similar ages that exacerbate the magnitude and severity of disturbance by disease. Fire is predominantly a natural phenomenon that burns the forest vegetation, polluting the atmosphere and wiping out biodiversity. One major distinction between ice storms and forest fires is the way the disasters are caused. The majority of forest fires could arguably be a result of human action, whereas an ice storm is an "act of god", an act that is out of human control. Foresters usually distinguish three types of forest fires: ground fires, which burn the humus layer of the forest floor but do not burn appreciably above the surface; surface fires, which burn forest undergrowth and surface litter; and crown fires, which advance through the tops of trees or shrubs. It is not uncommon for two or three types of fires to occur simultaneously. Forest management has been able to reduce the occurrence of fires, but many forest fires remain beyond human control. Humans cause the majority of forest fires. Campers who do not put out their bonfires, or who litter lit cigarette butts, are responsible for such fires. A natural occurrence such as lightning can spark a forest fire, but the probability is small compared to human action. The conventional way of putting out or reducing the spread of a forest fire has been water-bombing aircraft. These aircraft are filled with gallons of water; with limited capacity, they fly above the flames and deposit their loads of water. For the purpose of this paper, deforestation simply means the loss of trees at a rate that exceeds the level of sustainable development. One of the major effects of forest fire is the release of carbon dioxide into our atmosphere. This eventually contributes to the greenhouse effect and global warming. The effect damages our ecosystem as well as depleting one of Canada's precious natural resources. Many projects, with both government funding and corporate sponsors, have done a good job of increasing awareness of the risks related to deforestation. Pests directly affect the quantity and quality of forest nursery seedlings and can indirectly cause losses by disrupting reforestation plans or reducing survival of outplanted stock. The movement of infested stock can disseminate pests to new areas. Since control of nursery pests may be based on pesticide usage, pest outbreaks may lead to environmental contamination. Woodland damage caused by livestock is a well-documented, yet persistent, forest health problem. Soil compaction, root disturbance and trunk/root collar damage caused by livestock reduce the vigor of trees. This paves the way for armillaria root rot, borers and other opportunistic organisms. Livestock also destroy the forest understory (reproduction), which hastens soil erosion and limits the future productivity of the site. The resulting forest decline reduces the quality, value and longevity of current and future trees on the site. Eliminating livestock from woodlands is the first step toward a healthier, more productive forest. As mentioned earlier in this paper, an ice storm is a natural phenomenon caused by nature, whereas forest fires are largely a result of human actions and are preventable. One of the major differences between fire and ice storms is the rate of damage. Forest fire has a direct impact on the woodlands by changing the diversity of the landscape. 
Forest fire wipes out an entire landscape of trees, causing a release of carbon dioxide. This results in global warming and contributes to the greenhouse effect. The release of carbon dioxide has a long-term effect on our ecosystem: carbon dioxide is trapped in our atmosphere, making it less permeable to outgoing radiation. This trapping effect eventually radiates heat back, causing global warming. The long-term effect is hazardous and changes our biodiversity. An ice storm has very little effect on our atmosphere. Damage to woodlands as a result of an ice storm is concentrated within the affected area; an ice storm does not spread like fire does, so only the woodlands in the area that has been hit are affected. Pests and disease slowly erode the quantity as well as the quality of woodland. Infestation slows growth and erodes the soil, limiting the production of trees. Pest control and good forest management could improve the quality as well as the productivity of these areas. Pests and diseases cause a slow change in biodiversity. As the woodland becomes infested, animals feeding on leaves and branches find it less desirable, eventually leaving the area in search of more suitable woodland. Like forest fire, pests and disease spread, but at a much slower rate. These agents infect the trees, eventually penetrating the roots and moving on to the next host. As mentioned previously, an ice storm does not spread; rather, its effect stays within the area. To conclude, fire and pests/disease are similar in the way these agents spread and infect their hosts; the spread can best be thought of by analogy with a virus infecting its host. Fire spreads at a much faster rate than pests/disease and its impact is instant. Both of these agents have long-term effects, which do not work in our favor. An ice storm affects the area it hits and will not spread. Furthermore, ice storms are predictable, whereas fire is not, since the cause of fire is often human error, which is hard to predict. Ice storms are not preventable, but the human actions that cause fires can be prevented. The potential damage from fire is far more severe than that from an ice storm. We must increase awareness to ensure that our woodland remains healthy and to protect our ecosystem.

Friday, August 30, 2019

Economics Internal Assessment Essay

The article discusses the effects of a severe flood in parts of Thailand on rice production. Rice production falls from 23 to 22 million metric tons. Supply is the quantity of goods and services that producers are willing and able to produce at a given price and in a given time period. The decrease in the supply of rice in Thailand is shown by the following graph: the graph shows that the flood in Thailand decreased the supply of the rice crop. The supply curve shifted to the left from S1 to S2, moving the equilibrium point from E1 to E2. The equilibrium price then rises from P1 to P2 and the equilibrium quantity moves to the left by 1 million metric tons. The increase in the price of rice brought advantages to the country. One of them is the increase in the total revenue of rice producers. Rice is a commodity good for which the price elasticity of demand is inelastic. Price elasticity of demand is the responsiveness of quantity demanded to a change in price; inelastic refers to the condition where the quantity demanded is less than proportionally responsive to the change in price. The following graph shows an inelastic demand curve for the rice market. As the producers' total gain is greater than their total loss, producers receive the advantage of higher revenue from the tragedy in Thailand. Total revenue is the quantity sold multiplied by the price of the product. Despite this advantage, the rise in the price of rice has brought a disadvantage to consumers. As the quantity of rice supplied has decreased, they are unable to buy as large a quantity of rice, and as its price goes up it increases the portion of their real income spent on rice, as it is a staple food. Thus, it results in the opportunity cost of a decreased remaining real income that could be spent on other goods. Opportunity cost is the cost of the best alternative sacrificed when a choice is made. Due to this opportunity cost, the producers of non-commodity products would then be harmed, as the quantity demanded of their products falls and therefore their total revenue decreases. To survive, those producers will raise their prices and thus harm consumers. Thus, a solution should be found to avoid further losses for consumers. One of them is to apply a maximum price for rice in the country. A maximum price is a price set by the government below the equilibrium price in order to help consumers cope with the high prices of certain commodity products. As seen on the graph, the maximum price is set at Pmax, below the equilibrium price of Pe. With the imposition of the policy, consumers are able to purchase rice at a low price. However, with rice supplied at the maximum price, consumers demand rice at quantity QD while the quantity supplied is at Qe, which leads to a shortage. A shortage is an excess of demand over supply of goods and services. In consequence, to satisfy the demand of consumers, a black market might arise. A black market is a situation where the product is sold illegally at a higher price than Pmax. Sellers of rice might also apply unfair practices such as rationing, where the amount of product is shared equally among customers, creating a limitation on consumption. Another solution to avoid the shortage is to import rice from overseas. An import is when a country purchases goods and services from overseas. The supply curve then shifts to the right and Pmax becomes the new equilibrium price; thus the black market and rationing would not arise. 
However, importing would still bring a disadvantage to the domestic producers of rice. The imported rice would be a new substitute for the high-priced domestic rice. The quantity demanded of locally produced rice would decrease and thus the total revenue of local producers would decrease. Another disadvantage is the occurrence of a trade deficit, because the country's imports increase while we assume exports remain constant. A trade deficit is the negative balance that arises when a country's imports are greater than its exports. Looking at the advantage offered by the maximum-price solution, it is more beneficial for Thailand to protect producers' revenue than merely to prevent illegal practices; thus, implementing a maximum price as a solution is more effective than importing rice.
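To make the total-revenue argument above concrete, here is a minimal numerical sketch in Python. The fall in output from 23 to 22 million metric tons is from the article; the two prices are hypothetical, chosen only so that demand comes out inelastic, since the essay gives no actual price data.

q1, q2 = 23.0, 22.0    # rice output, million metric tons (from the article)
p1, p2 = 500.0, 540.0  # hypothetical prices, $ per metric ton
# Midpoint (arc) price elasticity of demand
ped = ((q2 - q1) / ((q1 + q2) / 2)) / ((p2 - p1) / ((p1 + p2) / 2))
print(round(ped, 2))     # -0.58, so |PED| < 1: demand is inelastic
# Total revenue = price x quantity
print(p1 * q1, p2 * q2)  # 11500.0 -> 11880.0 (million $): revenue rises
# Because demand is inelastic, the price rise outweighs the quantity fall,
# which is the producers' gain described in the essay.

With these assumed numbers, revenue rises by about 3.3% despite the smaller harvest, illustrating why inelastic demand turns a supply shock into a producer gain.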

Thursday, August 29, 2019

Poverty in Canadian Society Term Paper Example

There is another traditional poverty measure based on basic needs, recommended by the Fraser Institute. As per this measure, poverty has reduced greatly in the past 60 years, reported at 4.9% in 2004 (Wikipedia para 2). Indicators of poverty have changed with the changing times. In comparison to middle-class Europeans, the "poor" of America possess larger homes; more than 70% have a car; about 20% have more than one means of transport; about 60% of the poor have two or more television sets. The traditional definition of the poor, denoting those who have a deficiency of food, shelter, and clothing, holds minimum authenticity and therefore requires redefining (Bauman 6). Before considering poverty in relative terms we need to find the parameters to compare: what are the standards, global or the highest known standards in Canada and Europe, as examples from the third world could be the worst on absolute poverty (Segal 7). Milton Friedman, one of the great post-war Nobel Prize-winning conservative economists, put the case this way: "The programme should be designed to help people as people not as members of particular occupational groups or age groups or wage-rate groups or labour organizations or industries" (Segal 18). ... Canada could not meet the poverty targets set by the United Nations in 1996, the International Year for the Eradication of Poverty, as reported by the National Council of Welfare. Since the 1990s, figures have been presented that indicate a rise in the number of poor people in Canada. Even at the height of the economic boom, the rate of decline in poverty was slow. There is no unanimous opinion on this, as everything depends on how we define and measure poverty. Some indicators of the rise in poverty include people's increasing dependence on food banks and emergency shelters. Between 1989 and 2000 the use of food banks increased by 96%. Even in the boom period of 1997-2000, food bank use increased by 9.4%. Housing has become a big issue for poor people. Canadian youth are the leading group affected by homelessness, but when it comes to measuring poverty, computations of poverty lines are not unanimous (deGroot-Maggetti 1-3). So far as poverty lines are concerned, in Canada there is no dearth of poverty measures. The federal government has a number of poverty-indicating measures. Besides these, social councils, organizations, and independent researchers have evolved their own measures, and provincial social assistance rates offer yet another set of poverty lines. Absolute measures stress basic human needs, while relative measures point towards the insufficiency of socially accepted standards above the poverty line (deGroot-Maggetti 3). Due to the different measuring standards of poverty, the term has become somewhat ambiguous. Further research into the causes of poverty in Canada can help in making the meaning clear. Many factors are responsible for poverty, although there is a difference between a "factor" and a "cause". A "cause" adds to the emergence of an issue such as poverty

Wednesday, August 28, 2019

Japan politics and the FDI Essay Example

There were multiple parties registered to participate in the last general election. These parties included the LDP, the DPJ, the JRP and the NKP, among others. Under different leadership styles and ideals, all the political parties reason from different platforms/manifestos. Politicians espoused the ideologies of their different political parties. They conducted their campaigns with varying manifestos that were commonly identifiable with their visions and missions for the general governance of Japan. These platforms set out what individual candidates or parties would do for the people when elected to government. This is a common scenario in political struggles in all nations. However, though the disparities of the parties are evident, close analysis shows some likenesses and differences between the competing political parties. Economic analysis of the similarities and differences reveals that they have impacts on foreign direct investment. Today's political landscape in Japan is dominated by political party manifestos. These platforms have changed politics in that campaigns have now become principle- and policy-oriented. Political campaigns now give the general public the opportunity to evaluate the political parties' past performance against the manifestos provided and to judge individual candidates based on the visions advocated in their manifestos. This is one of the similarities between competing parties in Japan. However, much of the manifesto strategy has been criticized as mere paperwork that only serves to win people's votes; manifestos are often designed with a catchy edge to argue why one candidate is better elected than his or her rival. According to the FX trade magazine, January-March 2013 edition, Shinzo Abe, the leader of the LDP, was quoted in an interview with the Wall Street Journal as saying that the persistent "deflation problem" in Japan would be a priority of his governance should he win the forthcoming elections. He argued that with a good spending plan he would be in a position to curb deflation, and this would go a long way in restoring investors' confidence (Anon 45). He argued that the bottom line in stabilizing the economy of Japan was adjusting the value of the yen, with a 2% inflation target; revaluing the yen would restore investors' confidence and boost better relations with investors, both local and foreign. The JRP pledged to ensure minimal corporate dependence on the central bank and to minimize income taxes as a way to boost investment and earn investors' confidence. The party also promised to eradicate nuclear power production by 2030 if elected into office (Martin, para 10). This in itself had an economic edge, in that western countries that in the past never considered investing in the country would now be won over. The Japan Future Party's economic platform was to make an overall cut in government expenditure before imposing a tax cut. Led by its founder, Yukiko Kada, the party also intends to reduce reliance on the central government to ease wasteful bureaucracy (Koh, para 6). By and large, a common feature of all the parties and their platforms was the zeal to end the deflation that has been challenging all efforts at economic development of the country since the Second World War. 
Different regimes of governance have always tried to revalue the country's currency in efforts to better the lives of the citizens. Different policies and strategies have been proposed and tried, though the currency is yet to rise to its rightful value. The parties also commonly pledged to have the restoration of the image of the country as regards

Tuesday, August 27, 2019

Popular Culture Essay Example

This paper outlines that popular culture is usually observable in such areas as clothing, cooking, sports and recreation, and also consumption and entertainment. In recreation, we can view golf as a popular culture practiced by the rich in society. Today cultural activities are segregated, and there exist restrictions that are both formal and informal. Restrictions apply to those who are not part of a culture and may be tempted to join it. This paper highlights that some cultural activities are highly restricted by the laws of the society; an example is beer drinking. Beer drinking, for instance, is prohibited in Saudi Arabia, and there exists a law that will prosecute those found drinking beer. The drinking of beer, then, is a popular culture among the masses of many societies, and this culture is promoted by the mass media through advertisements of beer brands; in some societies, like Saudi Arabia, the culture is formally restricted. There also exist informal restrictions on cultural practices. These informal restrictions do not exist in writing but are termed norms in the society, rules governing behavior. They include the expected reaction of the society: the society has informal ways to discourage behavior, for example a person doing wrong may be isolated by the society. Take the case where people have tattoos all over their body; this is a popular culture among the young, but in some societies the making of such decorations on the skin may lead to one being isolated and disowned by the society. This is an informal way in which this popular culture is restricted by the society, and it is helpful in restricting such cultures.

Monday, August 26, 2019

SUMMARY South African Opposition Picks New Voice Essay

To entice black voters, the party will have to strive for new policies that address the needs of low-income young blacks. Because the next national vote is not until 2014, Ms. Mazibuko must try to place pressure on the government through parliamentary sessions. The previous leader of the party, Athol Trollip, did not appeal to the wider voting public because he is a 47-year-old white farmer. In choosing a new type of leader, the Democratic Alliance is attempting to turn over a new page in politics and offer itself as a viable alternative to the ruling African National Congress. Although the party will focus specifically on blacks' issues, it is a party that will stand for South Africans of all races. Historically, the Democratic Alliance has been thought of as a white-dominated party. This decision to appoint a black leader is a step into the future and will hopefully result in the Democratic Alliance making more of a difference on the South African political

Sunday, August 25, 2019

Artist Scott Joplin Research Paper Example

No rag composer would rival Joplin's dreams and hopes for the music - dreams that resulted in the creation of a ballet, two operas, and other creations that directly defied the uncultured status of the rag expression (Gioia 21). Even though Joplin's bolder works did not gain popularity or recognition during his lifetime, his works are now prominent because of his grand ambitions, as well as his single-minded faith in ragtime as a major musical genre - a faith that, years after his death, became legitimized by his late recognition as a great American musician. Scott Joplin was born on the 24th of November 1868 in Texarkana, Texas (The Columbia Encyclopedia 53). He belonged to a family of musicians - his mother played the banjo, while his father played the violin. The banjo may have had a great influence on the musical creativity of Scott: the syncopated cadence of the African-American banjo music of the 19th century is without a doubt a forerunner of the subsequent piano rag genre (Cardell 533). Scott showed his interest in the keyboard early on. He frequently went with his mother to her workplaces and would improvise and play the piano. As a teenager, Scott was already a professional pianist, with offers to play at social occasions and churches on the boundary of Arkansas and Texas. Afterward he became a music teacher and accompanied a vocal quintet that sang and played all over the area (Gioia 25-26). During this time, Scott attempted to make his first composition. Scott moved to St. Louis in the 1880s, where he was paid as a pianist and a soloist in bars. He also played for a band. The ensemble job gave Scott the chance to enhance his talents in arranging, talents that would eventually reach their highest point in the compositions for his two operas (Berlin 17). Scott lived in St. Louis for several years, but he travelled often throughout these years. His attendance at the World's Columbian Exposition in Chicago in 1893, a very important exposition that drew the attention of the greatest composers of the period, could have been specifically momentous (Tawa 137). Even though rag composition had not yet been made public, it was in fact extensively performed at the fair, although most frequently at the fringes of the exposition grounds, where African-American composers performed; the more prominent spots were reserved for white musicians. In the 1890s, Scott moved to Sedalia, where he studied composition and rhythm at the George R. Smith College for Negroes (Gioia 24). Scott composed the 'Maple Leaf Rag' in 1897, a creation that would eventually become the most celebrated ragtime music of its period (Haskins & Benson 111). However, it was not until a few years afterward that John Stark published the composition, and in the initial year only a few copies were sold. Nevertheless, the 'Maple Leaf Rag' began to gain popularity in 1900, eventually becoming the first musical composition to sell roughly a million copies (Haskins & Benson 101). Aspiring pianists may have encountered difficulties navigating the rhythmic and technical complexities of Joplin's popular rag; numerous musicians undoubtedly bought the composition and struggled with its difficult syncopations. Looking back, it can be discerned that the 'Maple Leaf Rag' simply alluded to the entirety of Joplin'

Saturday, August 24, 2019

CCNA Basic (Assignment Booklet 2) Essay Example

By subnetting the network with a suitable subnet mask you gain more networks from the same address space, and it is easier than requesting two more address blocks. This enables each network to have its own subnet to house its own devices, in order to keep the network secure and ensure there is no collision of traffic. Question 3: Using 10.0.0.0, create a subnet mask for the 900 subnets. Identify the 198th, and use VLSM for a further 40 subnets. With the 255.255.255.224 (/27) mask, the subnets and usable host ranges run as follows:
10.0.0.0 255.255.255.224 host range 10.0.0.1 to 10.0.0.30
10.0.0.32 255.255.255.224 host range 10.0.0.33 to 10.0.0.62
10.0.0.64 255.255.255.224 host range 10.0.0.65 to 10.0.0.94
10.0.0.96 255.255.255.224 host range 10.0.0.97 to 10.0.0.126
10.0.0.128 255.255.255.224 host range 10.0.0.129 to 10.0.0.158
10.0.0.160 255.255.255.224 host range 10.0.0.161 to 10.0.0.190
10.0.0.192 255.255.255.224 host range 10.0.0.193 to 10.0.0.222
10.0.0.224 255.255.255.224 host range 10.0.0.225 to 10.0.0.254
10.0.1.0 255.255.255.224 host range 10.0.1.1 to 10.0.1.30
10.0.1.32 255.255.255.224 host range 10.0.1.33 to 10.0.1.62
10.0.1.64 255.255.255.224 host range 10.0.1.65 to 10.0.1.94
10.0.1.96 255.255.255.224 host range 10.0.1.97 to 10.0.1.126
10.0.1.128 255.255.255.224 host range 10.0.1.129 to 10.0.1.158
10.0.1.160 255.255.255.224 host range 10.0.1.161 to 10.0.1.190
10.0.1.192 255.255.255.224 host range 10.0.1.193 to 10.0.1.222
10.0.1.224 255.255.255.224 host range 10.0.1.225 to 10.0.1.254
10.0.2.0 255.255.255.224 host range 10.0.2.1 to 10.0.2.30
10.0.2.32 255.255.255.224 host range 10.0.2.33 to 10.0.2.62
10.0.2.64 255.255.255.224 host range 10.0.2.65 to 10.0.2.94
10.0.2.96 255.255.255.224 host range 10.0.2.97 to 10.0.2.126
10.0.2.128 255.255.255.224 host range 10.0.2.129 to 10.0.2.158
10.0.2.160 255.255.255.224 host range 10.0.2.161 to 10.0.2.190
10.0.2.192 255.255.255.224 host range 10.0.2.193 to 10.0.2.222
10.0.2.224 255.255.255.224 host range 10.0.2.225 to 10.0.2.254
10.0.3.0 255.255.255.224 host range 10.0.3.1 to 10.0.3.30
10.0.3.32 255.255.255.224 host range 10.0.3.33 to 10.0.3.62
10.0.3.64 255.255.255.224 host range 10.0.3.65 to 10.0.3.94
10.0.3.96 255.255.255.224 hos
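As a cross-check on the table above, the /27 subnets of 10.0.0.0 and their usable host ranges can be enumerated with Python's standard ipaddress module. This is an illustrative sketch of the subnetting arithmetic only, not part of the original assignment answer.

import ipaddress
from itertools import islice

network = ipaddress.ip_network("10.0.0.0/8")
# A /27 carved out of a /8 yields 2**19 = 524,288 subnets; print the first 12
for subnet in islice(network.subnets(new_prefix=27), 12):
    hosts = list(subnet.hosts())  # the 30 usable addresses in each /27
    print(subnet.network_address, subnet.netmask, hosts[0], "-", hosts[-1])
# First line printed: 10.0.0.0 255.255.255.224 10.0.0.1 - 10.0.0.30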

Kudler Fine Food's Marketing Strategy and Tactics Essay

With all these aims in mind, the company presently aims at using market research. This paper aims to discuss the importance of marketing research for the company and also to identify the areas where additional market research is needed. The paper will also focus on the competitive intelligence which will assist in the development of marketing strategy and tactics for the company. Change in any organization is one of the most difficult things to accomplish, and marketing research in any company tends to bring about change (McNamara, 2006). Market research plays a very important role in businesses and helps businesses study changes in the markets so as to be able to accomplish the changes intended in the company. It is essential to note that marketing research is very beneficial in assisting companies to develop their marketing strategies and tactics. The benefits of marketing research are as discussed below:
a) Detailing the constraints: Kudler Fine Foods will gain strong guidance from the marketing research and will be able to focus on the constraints which are an imperative part of all decision making.
b) Marketing action: Market research provides the company with the required data for the development of strategies and tactics (Kotler, 1999). This information proves to be invaluable for the accuracy of the tactics and the development of the strategies.
c) Customers' views: Marketing research provides the company with a chance to identify customers' responses and views. This allows the company to make more informed decisions in terms of marketing and also in regard to the development of new products (Aksel, 2007). Evaluating customer needs constantly allows the company to develop products and services that accurately meet the needs of the customer and successfully cater to the target customers.
d) Estimation

Friday, August 23, 2019

Kofi Annan, in 2000, remarked, It has been said that arguing against globalization is like arguing against the laws of gravity. Discuss the full implications of his claim - Essay

Globalization refers to the increasing interconnectedness of places and people due to advancements in information, communication, and transport technologies, which have precipitated a wave of cultural, economic, and political convergence. The laws of gravity, on the other hand, propose that all particles in the universe attract each other with a force directly proportional to the product of the particles' masses (Gondhalekar, 2011: p21). This condensed law of gravity has been proven scientifically to be inevitable and irrefutable. Based on Kofi Annan's analogy, globalization, despite being a controversial concept, is likewise inevitable and irrefutable. Perhaps the most important evidence of the inevitability and irrefutability of globalization is the United Nations. Indeed, the biggest function of the UN is to act as an international forum for the organization of dialogue and meetings where government representatives from around the world can come together to adopt shared values and standards (Kunkel, 2014: p240). As globalization has hastened the transfer of power from state actors to non-state actors, non-state actors like the UNHCR, UNDP, and UNEP have become increasingly influential on transnational issues. Today, there are growing calls for the strengthening of the UN in the face of new challenges like human rights violations, humanitarian crises, environmental and health concerns, and armed conflicts. Never has the UN been called on to solve so many challenges, which is evidence of states becoming more globalized. The UN, as an organization where different state and non-state actors can dialogue, has provided an avenue where governments form partnerships and relationships, in turn accelerating the pace of globalization (Rasche & Gilbert, 2012: p108). Another area that the phenomenon of globalization has greatly affected is the relationship between EU member states, which have witnessed increased integration since the end of WWII. However, apart from

Thursday, August 22, 2019

History and myth Essay Example

The Worlds Wife revises fairytale, history and myth and reworks them into contemporary, feminist fables. With reference to three of the poems in the volume, this essay examines the techniques employed by Duffy in writing contemporary feminist fables. Duffy's volume The Worlds Wife is a collection of dramatic monologues where Duffy becomes a ventriloquist, inventing the words which famous, silent wives from history or myth might have said. Her use of humour and play on clichés creates a collective female voice through which dominant male characters are criticised. Duffy reworks contemporary feminist fables and adopts different personae by employing different techniques, which are particularly displayed in her poems 'Mrs Midas', 'Mrs Lazarus' and 'Mrs Aesop'. Duffy's use of witty humour in the poem 'Mrs Aesop' allows her to condescend to the male counterpart by turning his famous fables against him and questioning his manhood. By contrast, 'Mrs Lazarus' portrays a more emotional persona grieving over her husband's death, where her other half fails to consider the impact of his return. Similarly, in 'Mrs Midas', the male character is overcome by greed, which blinds his ability to comprehend the repercussions of his actions. The metaphorical autobiographies allow Duffy to adopt a variety of dramatic personae and assume a multiplicity of voices, which portray issues and views sensitive to her own. She explores the notion of the self in relation to the other, particularly in the poem 'Mrs Midas'. The poet is able to present a wide range of emotions through the practical persona, who feels a sense of exasperation due to her husband's selfishness. The sensual qualities of the persona are highlighted through the use of soft sounds, 'breath' and 'brow', and 'my fingers wiped the other's glass'. She is then depicted as multitalented, especially in comparison to her husband, who was 'standing under the pear-tree snapping a twig'. His pointless and ridiculous activity belittles his usefulness and thus increases his wife's, as it does not require much talent to carry out such an activity. The persona takes an anecdotal approach, principally as the tragedy is building up, belying the serious concern: 'I said' and 'What in the name of God is going on?' show the use of colloquial language, which helps the persona's voice emerge. The phrasing used throughout the poem emphasizes her practicality and ability to make sense out of any situation: 'I served up the meal' and 'So he had to move out' illustrate that she is not theatrical, but is calm and logical, in contrast to her partner's childish and immature behaviour, 'he toyed with his spoon'. The persona is able to rise above him and assert her authority, and her use of bitter sarcasm introduces comedy to the poem. Duffy's use of the cliché, which is commonly present in her poems, shows how worthless he has become and how ashamed and fearful she is for him, as he is a fool who could not think beyond his short-term greed. Similarly, Mrs Lazarus also has to face the consequences of her husband's return, after she finally manages to deal with her grief over his death and move on. The dramatic persona created in this poem is extremely loyal to her husband and devastated at the fact that she has lost her other half. 'Howled', 'shrieked', 'clawed' and 'one empty glove' reinforce the imagery of her suffering and grief-stricken state. 
She is a persona very expressive of her emotions and goes through the entire pain of her loss, even to the extent where there are images of suicide because of what she is feeling, 'double knot round my bare neck'. The alliteration of soft sounds, 'slept... single... stuffed', and harsh sounds, 'gone... gutted... glove', brings emphasis to the range of her emotional suffering. As her memory of him and her grief recede, she develops a more practical, factual tone in her diction, 'Then he was gone', showing that she has finally moved on. When her husband returns, her phrasing and diction change and begin to sound more harsh and bitter, 'rotting... graves... slack chew', reflecting the fact that he is insensitive to her emotions, despite everything she has been through.

Wednesday, August 21, 2019

Crime And The Impact On Modern Society

The threat and fear of crime are constant concerns that impact many people in modern society. The safety of schools and communities is usually indicated by crime rates, which are justifiably major factors in choosing where to reside. Research denotes that juveniles are involved in numerous crimes each year, both as perpetrators who are subjected to legal intervention for status offenses such as running away, school truancy and curfew violations, and as victims (Regoli, Hewitt, & Delisi, 2007). The literature review reveals that there are official measures of juvenile crime, which include those by police, the courts, and corrections agencies, and unofficial measures of juvenile crime, such as self-report and victim surveys, that try to give a more complete description of the true extent of juvenile crime (Schmalleger & Bartollas, 2008). This paper will discuss several methodologies of official and unofficial measurement of juvenile delinquency and the identifiable problems with these types of data collection.
Keywords: Uniform Crime Report, National Incident-Based Reporting System, self-reporting
Criminologists have recognized for years that there are major problems in defining and measuring juvenile delinquency. The first is the legal definition, which applies to youth who have been officially labeled in juvenile court. Legal definitions vary by time and place, making comparisons difficult because they are not uniform in all jurisdictions with respect to the age of prosecution; thus they tend to provide an unrealistic picture of the extent and nature of delinquency, since they deal only with youth who are caught and processed (Regoli et al., 2007). Behavioral definitions, in contrast to legal definitions, can sometimes provide a more accurate picture of the extent and nature of delinquency and the characteristics of the offenders and victims. Under behavioral definitions, juveniles who violate statutes are seen as delinquent whether or not they are officially labeled (Regoli et al., 2007). The result is that delinquency appears to be evenly distributed across social class and more frequent than official statistics would lead us to believe, illustrating a much-noted problem of relying on self-reporting processes and the difficulties of collecting accurate data (Regoli et al., 2007; Schmalleger, 2009). Measurement is not new to the juvenile justice system. Too often, data collected by juvenile justice agencies have been unrelated to outcomes, and have seldom allowed the public to assess performance in a meaningful way (Schmalleger & Bartollas, 2008). I suggest that this is one of the reasons such information does not fully help juvenile justice systems and organizations determine the impact or cost-effectiveness of their interventions. Data are most useful when they provide input to juvenile justice professionals regarding public awareness and support, and can provide citizens and other government stakeholders with a sense of what the juvenile justice systems and agencies are really accomplishing or trying to accomplish.
Official Measuring of Juvenile Delinquency
Even with all the debates about the methodology of juvenile delinquency measurement, official crime statistics are considered the most accurate measures of crime and are often used in the news media and by justice agencies. This data is usually compiled by police, courts, and corrections agencies. 
The Uniform Crime Report (UCR), a program which began in 1929, provides this type of data at the national and local levels, and tracks occurrences of eight specific crimes, including the locations and frequencies of each (Lynch & Jarvis, 2008). This useful information is collected by the Federal Bureau of Investigation (FBI) from law enforcement agencies across the country, and presents a descriptive statistical, historical profile of violent juvenile crime in America based on the percentage of all arrests (Lynch & Jarvis, 2008). Another official measure for data collection on juvenile crime is the National Incident-Based Reporting System (NIBRS). This system was developed in 1988 by the federal government to address some of the shortcomings of the UCR, and its data are generated from the records management systems of federal, state and local agencies (Regoli et al., 2007). The NIBRS, which collects information on every arrest and incident, was intended to be a broader crime reporting system than the UCR program, and it gives much greater detail on specific crimes because it differentiates between crimes that are attempted and crimes that are completed (Schmalleger, 2009). Proponents of official measurements have recently argued that these measures show validity for certain crimes; that any problems tend to be stable over time, allowing trends and patterns to surface; that the data are easy to access and relatively inexpensive; that they allow for city and regional trend comparisons; and that they provide detailed information on reporting patterns, who is arrested, and homicides (Lynch & Jarvis, 2008). In contrast, opponents have raised the issues that the reports do not capture unreported crime, because of under- or over-reporting by law enforcement (the unreported portion is often referred to as the dark figure of crime); and, as it relates to juvenile crime, the number of arrests is not equal to the actual number of youths who committed crimes, and group arrests overestimate juvenile crime (Lynch & Jarvis, 2008).
Un-Official Measuring of Juvenile Delinquency
Even though most of the fundamental problems with official crime statistics had been identified before the end of the nineteenth century, including the major problem of the dark figure of unknown crime, it was not until the mid-twentieth century that systematic attempts to unravel some of the mysteries of official statistics were initiated (Regoli et al., 2007). Turning to data sources outside the official agencies of criminal justice, unofficial crime statistics were generated in order to explore the dark figure of crime not known to the police, and to create measures of crime that were independent of the official registrars of crime and crime control, which many felt would address the validity and reliability issues in the measurement of crime (Doerner & Lab, 2005). One unofficial data collection measure used for juvenile delinquency is self-reporting. These reports are confidential questionnaires administered to samples of youth who voluntarily report on their own involvement in delinquent activities, and they sometimes provide a more complete picture of juvenile delinquency (Webb, Katz, & Decker, 2006). They are not, however, error free. These measures use population samples that are arguably small, and it has been suggested by some criminologists that they are not representative of juvenile offenders as a whole (Webb et al., 2006). 
Recently, it has been proposed by some researchers that victim surveys recognize the inadequacies of official measures of crime, particularly the apparently substantial volume of crime and victimization that remains unknown to, and therefore un-acted upon by, criminal justice authorities (Doerner & Lab, 2005). The National Crime Victimization Survey (NCVS) is a survey sponsored by the federal government which has been collecting data on personal and household victimization since 1973 (Doerner & Lab, 2005). It was designed with four primary objectives: to develop detailed information about the victims and consequences of crime; to estimate the number and types of crimes not reported to the police; to provide uniform measures of selected types of crimes; and to permit comparisons over time and across types of areas (Doerner & Lab, 2005). In general, victimization surveys have the same problems and threats to validity and reliability as any other social-science survey. Ironically, there is a double dark figure of hidden crime that is not reported to interviewers in victimization surveys designed to uncover crimes not reported to the police (Doerner & Lab, 2005). Such incomplete reporting of victimization means that victimization surveys, like official data sources, also underestimate the true amount of crime, and this suggests that the discrepancy between the crime rate estimates of the victim surveys and the UCR may be even larger than reports indicate. A noted strength of victim surveys is that most crimes included in the questionnaire are FBI index crimes; but research also reveals that two index crimes (murder and arson) are not included in the survey, though many other important crimes are measured in the victimization surveys (Doerner & Lab, 2005). It is fair to argue that the results from this type of data collection often show that victimization statistics are somewhat limited in their representativeness and generalizability.
Conclusion
Debates over the proper way to measure delinquency have been heated for the last few decades. Research reveals that three major sources of data have been used: self-reports of delinquent behavior, victimization surveys, and official accounts (e.g., arrests, court records) (Regoli et al., 2007). These sources of data do not always agree, and studies have shown that certain methodologies, such as survey reports, show weaker associations between social status (e.g., poverty, race, gender) and delinquency than official records (Regoli et al., 2007). Proponents of these methodological measurements argue that the sources of data yield reasonably similar patterns when the object of inquiry is serious and persistent delinquency (Schmalleger, 2009). I suggest there is still a need for further methodologies to aid in the challenges of preventing juvenile crime and reducing recidivism.

Tuesday, August 20, 2019

Water And Wastewater Analysis Focusing On Formaldehyde Environmental Sciences Essay

Formaldehyde (FA) has been widely used in the wood, paper and textile industries, as well as in the production of a number of chemicals and for the preservation of biological material. It is also present in almost all common foods, and it is estimated that adult dietary intake is 11 mg/day. Occasionally, it is used as a disinfectant to disinfect water filters (ADWG, 2004). FA can be toxic, allergenic and carcinogenic to human beings (Lyon, 2006). Several epidemiological studies of occupational exposure to formaldehyde have indicated an increased risk of nasopharyngeal cancers, leukemia and eye irritation (OSHA, 2008). The International Agency for Research on Cancer has concluded that FA is probably carcinogenic to humans (IARC 1987). FA may be present in water through industrial effluents, ozonation of naturally occurring humic materials, contamination by accidental spills and overflows, as well as deposition from the atmosphere (ADWG, 2004). A study showed that FA concentrations in rainwater are expected to be up to three orders of magnitude higher than in surface water, which indicates that atmospheric deposition is a significant source of FA in aquatic systems (Kieber et al., 1999). Generally, the concentration of formaldehyde in water is very low, posing a low environmental risk to humans and organisms. However, when accidental spills or overflows happen, chemical analyses and monitoring programs are needed.
1.1 Formaldehyde in drinking water
FA enters drinking-water mainly from the oxidation of natural organic matter such as humic materials during ozonation (Glaze et al., 1989) and chlorination (Becher et al., 1992). Leaching from polyacetal plastic fittings in which the protective coating has been broken can sometimes be one of the sources of FA in drinking-water (IPCS, 2002). According to the Australian guideline value, the concentration of formaldehyde in drinking water should not exceed 0.5 mg/L (ADWG, 2004).
1.2 Formaldehyde in wastewater
FA has been used as a key chemical in many industrial activities. In the organic synthesis industry, the synthesis of special chemicals such as pentaerythritol and ethylene glycol uses FA as one of the agents. In addition, FA is essential in the production of resins, textiles, paper products, medicinal products and drugs (Khiaria et al., 2002). Therefore, effluents arising from these industrial applications may contain significant amounts of FA, which need to be determined and treated.
1.3 Chemical analysis of formaldehyde in water and wastewater
Since the concentration of formaldehyde in water can occasionally be high enough to pose a risk to human health, methods are needed to measure its concentration accurately. The chemical analysis of formaldehyde can provide meaningful information on the quality of water, so that actions can be taken immediately if changes occur, ensuring that water suppliers provide consumers with water that is safe to use and that meets public recreational and aesthetic requirements.
Advice on sample collection
In sample collection, the sampling site, time and weather conditions need to be considered in order to obtain a volume of water that is representative of the water body. Before a sample is analyzed in the laboratory, it should be kept in such a manner during storage and transport as to minimize any changes that may occur; sometimes preservatives can be added (Private Water Supplies website).
The essential steps in a sampling program are shown below (from unit 5, lecture notes).
[Flow chart: problem definition -> sampling program design -> field sampling -> sample preparation -> chemical analyses -> data analysis -> reporting, applied to formaldehyde]
2.1 Sample containers
Formaldehyde is a volatile organic compound; therefore, it is recommended to use a 40 mL brown glass vial, or a transparent glass vial wrapped in aluminum foil, as the sample container, to prevent the analyte from escaping to the air or deteriorating after exposure to light. The cap must have a Teflon-lined septum. Polypropylene screw caps should be used instead of typical phenolic resin caps due to the possibility of sample contamination with FA from the resin (US EPA 1998). In addition, when taking samples, we should use pre-cleaned bottles that are free from volatile organics (Standard operating procedures for water sampling methods and analysis, WA, 2009).
2.2 Sample collection
Sample collection is very important in determining the safety of water, so it is essential to ensure that the samples are representative, reliable and fully validated. For complicated and unstable water, such as wastewater effluent, sample collection should also cover the random and regular variations in water quality as well as the fixed conditions.
2.2.1 Types of sampling
The types of sampling include grab sampling, composite sampling, flow-related sampling, automatic sampling and continuous monitoring. Each method has its own characteristics and is suitable for different water bodies and sampling purposes. For drinking water, we use the grab sampling method. In grab sampling, all of the test material is collected at one time, so a grab sample can only reflect the state of water quality at a particular site and time, and only if the sample is properly collected can it represent the water body of concern (Norwalk Wastewater Equipment Company website). Grab sampling has some advantages. For example, some unstable parameters such as VOCs, chlorine residual and nitrites in a water treatment plant can be effectively analyzed. Grab sampling can also be conducted for pH, temperature and DO monitoring (NWEC website). Drinking water is well mixed, stable and generally free of contamination; therefore, grab samples can already be good representations of the water quality. In addition, this method is very common, easy and has a low capital cost. For wastewater, we use the composite sampling method. Composite sampling is a sample collection technique in which many individual discrete samples are taken at regular intervals over a period of time and combined, so that the collected sample reflects the average state of water quality over the collection period (NWEC website). Wastewater treatment plants receive variable, sudden surges of waste flow from industries and households during a day, followed by intermittent periods of low flow (NWEC website). Analyzing a single grab sample of effluent at a fixed time and site can introduce bias and cannot reflect the real varying flow patterns at effluent outlets. Therefore, the composite sampling method is more plausible for evaluating the overall performance and state of wastewater quality; a minimal numerical sketch of the flow-proportional variant follows.
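As an illustration, a flow-proportional composite (the flow-related sampling mentioned above) can be sketched in a few lines of Python. The plant flows and FA concentrations below are entirely hypothetical; they serve only to show how aliquot volumes are weighted by flow so that the pooled sample represents a flow-weighted average.

flows = [120.0, 240.0, 310.0, 180.0]  # hypothetical plant flow at each interval, m3/h
concs = [0.8, 1.6, 2.1, 1.1]          # hypothetical FA in each grab sample, mg/L
total_volume_ml = 1000.0              # composite volume to make up

# Volume to pour from each grab, proportional to the flow at its collection time
aliquots = [total_volume_ml * f / sum(flows) for f in flows]
# Flow-weighted mean concentration represented by the pooled sample
composite = sum(v * c for v, c in zip(aliquots, concs)) / total_volume_ml
print([round(v, 1) for v in aliquots])  # [141.2, 282.4, 364.7, 211.8] mL
print(round(composite, 2))              # 1.56 mg/L

An equal-volume (time-proportional) composite, as described for the wastewater outlet below, is the special case where all flows are treated as equal.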
2.2.2 Sampling sites
Drinking water sampling. For drinking water, we can take a sample from a customer's tap, from a storage tank, or from other representative places.

From a tap: Choose a tap that is used most frequently. Any external fittings, such as filters, and contaminants such as grease and sediment build-up around the spout should be removed prior to testing. Since tap outlets may be contaminated, disinfect the tap by swabbing both the outside and the inside several minutes before sample collection; the disinfection reagent can be a 0.1% sodium hypochlorite solution (Forensic and Scientific Services, Queensland Government, 2008). To get a representative fresh water sample, run the tap for a while (about 2-3 minutes) to flush the stagnant water out of the pipe.

From a storage tank: For shallow depths (a small water supply tank), the recommended sampling depth for a sample representative of the source of supply is 0.5 m. Do not touch the inside of the bottle or the cap. Plunge the neck downwards into the water and then turn it upwards until the bottle is overfilled and the mouth faces up (FSS, QLD Government, 2008). For deep water (a large water supply dam), collect the sample using a suitable depth-sampling device such as a hosepipe, sampling rod or pump, taking care not to disturb bottom sediment.

Wastewater sampling. For wastewater, take the samples from the outlets of the wastewater treatment plant. Since the composite sampling method is used for wastewater analysis, pour equal portions of the freshly collected samples into the appropriate container.

2.2.3 Collection instructions
According to the surface water sampling methods and analysis technical appendices for Western Australia (2009), the recommended collection techniques are as follows. The containers holding samples should not be pre-rinsed. It is recommended that the bottles be used to collect the sample directly rather than decanting; however, in some cases decanting samples from large collection vessels into sample vials is acceptable, provided all the containers are free of contamination. For example, a clean bucket of about 10 L capacity or a large 1 L beaker can sometimes be used to collect the surface sample, which is then transferred to the laboratory sample container. To minimize exposure to air and light, containers should be overfilled, and the cap should then be sealed tightly, free of air bubbles, and faced down to help prevent leakage.

2.2.4 Complete the lab form and sample label
After sample collection, complete the lab form, which contains sampling information such as water volume and sampling sites, and stick a label on each sample container recording the sampling location and time.

2.3 Preservation
Filtration: For wastewater samples, filtration should be carried out, since suspended particulates may block the testing instruments.
Preservation: Some experiments have indicated that aldehydes are susceptible to microbiological decay. To inhibit microbial decomposition of organic compounds, it is recommended to add 0.1 mL of CHCl3 (Economou et al., 2002) or, alternatively, 15 mg of copper sulfate pentahydrate to water samples (US EPA, 1998).

2.4 Sample transportation and storage
Transportation: During transportation, minimize contamination and disturbance of the water samples, keep them in the dark, maintain them cool in a chilled insulated container, and return them to the lab as quickly as possible (Environmental Health Guide, WA, 2006).
Storage: Before laboratory analysis, the samples should be refrigerated, not frozen, at 1-4 °C in the dark.

Available techniques for sample extraction

3.1 Available techniques
Sample extraction is used to concentrate the analyte for successful instrumental analysis. There are various methods for FA extraction, each with its own characteristics; the object is to choose an optimal technique that avoids excessive loss of the analyte and achieves the desired performance. Soxhlet extraction and solvent extraction are the traditional extraction techniques, particularly for organic compounds. However, given their limitations, such as the need for a large volume of solvent, the lack of thermal stability and the volatility of some analytes, and interference from contaminants in the extraction thimbles (Grob et al., 2004), they may not be desirable for FA extraction. According to recent studies, several extraction techniques have already achieved good results. They are listed as follows.

Solid-phase extraction (SPE). SPE is a technique comprising two extraction steps. The first step is the non-equilibrium removal of the analytes from the liquid sample by retention on a sorbent; the second is solvent elution or thermal desorption of the selected analytes (Grob et al., 2004). One successful approach for determining formaldehyde in drinking water has been colorimetric solid-phase extraction, using Empore(TM) Anion Exchange-SR 47-mm extraction membranes as extraction cartridges, with elution from the SPE cartridge by sodium hydroxide solution (Hill et al., 2009). Another successful approach used poly(allylamine) beads for solid-phase extraction, eluting from the C18 cartridge with hydrochloric acid (HCl) (Kiba et al., 1999). SPE is one of the most widely used techniques in FA analysis. Owing to its high sensitivity and efficiency, it can determine FA concentrations down to 80 ppb within several minutes (Hill et al., 2009). One drawback, however, is its demanding packing and sorbent selection requirements, which can make materials preparation costly and time-consuming. Another problem is that SPE may lose analyte during elution as the analyte passes through the tube.

Ultrasonic extraction (USE). USE is a fast technique that uses ultrasound to ensure good contact between sample and solvent (Grob et al., 2004). One study reported the successful use of USE in FA extraction: formaldehyde was first extracted into water with ultrasound assistance and introduced directly into a derivatization column packed with a moderately sulfonated cation-exchange resin. The resin had previously been charged with 2,4-dinitrophenylhydrazine (DNPH) and was used as the solid support. The formaldehyde-DNPH derivative was eluted with sodium dihydrogen phosphate in 50% ACN (Chen et al., 2008). Compared with the traditional techniques, this method proved fast, accurate, sensitive and labour-saving; in addition, only small quantities of solvent and sample were required, making it a promising extraction method (Chen et al., 2008). Its drawback was low recovery efficiency: for low concentrations of analytes in samples, multiple extractions are often required (Grob et al., 2004).
Supercritical fluid extraction (SFE). SFE is a fast and efficient technique. Supercritical fluids (SFs) are dense gases held above their critical temperature and pressure, and analytes are generally more soluble in an SF than in the corresponding liquid. Important properties of the analytes, such as melting point and solubility in the SF, therefore need to be considered (Grob et al., 2004). One study applied SFE to FA analysis using CO2 as the extraction fluid; the experiment was carried out at 13.8 MPa and 120 °C, with 15 min of static extraction time, 15 min of dynamic extraction time and 80 μL of modifier (methanol). The DADHL derivative, the product of condensing FA with ammonia and acetylacetone, could then be detected by UV spectrometry (Reche et al., 2000). One drawback of SFE for FA analysis is the use of supercritical CO2: because CO2 has low polarity while FA is polar, the extraction was difficult and recoveries were poor.

Solid-phase microextraction (SPME). SPME is a good extraction method that can be coupled with GC or HPLC for high performance in sample analysis. It uses a fiber coated with an extracting phase to concentrate the analytes; the fiber is then transferred to the injection port of the separating instrument, where the analytes are desorbed and rapidly delivered to the column (Pawliszyn, 2009). One study described an SPME experiment for FA analysis: prior to use, a 75 μm Carboxen-polydimethylsiloxane fiber was conditioned in the GC injection port at 300 °C under helium flow for 1.5 h; the extraction was carried out at 80 °C for 30 min with medium stirring of the sample; and thermal desorption was then performed in splitless mode at 310 °C for 3 min (Bianchi et al., 2007). SPME has several advantages for FA work. It is simple, easily conducted and solvent-free, and its detection limits can reach parts per trillion, which is very useful since FA concentrations in water are very low. In addition, SPME is fast and low-cost, minimizing sample holding times and reducing analyte loss and sample contamination (Trenholm et al., 2008). One problem, however, is that SPME can lose analyte during extraction: only about 1% of the analyte partitions onto the fiber (Leap Technologies website).

Stir bar sorptive extraction (SBSE). The theory of SBSE is similar to that of SPME: a spinning glass-covered magnetic bar coated with a thick layer of polydimethylsiloxane extracts the analytes, and thermal desorption is then carried out in the GC injection port (Grob et al., 2004). SBSE has been applied successfully to trace analysis, especially of VOCs and semi-volatile compounds in environmental, biomedical and food applications. Its detection limits can be extremely low, which suits FA analysis in water (David et al., 2003). There is limited information on FA analysis by SBSE, but it remains a promising method for the future.

Newer techniques. Newer techniques such as pressurized liquid extraction (PLE), subcritical water extraction (SWE) and microwave-assisted solvent extraction (MWE) are enhanced liquid extraction techniques. Compared with traditional Soxhlet and solvent extraction, these methods are less time-consuming, consume less solvent, are more efficient, and can be applied to low concentrations of analytes in samples.
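A useful way to see why these preconcentration steps matter is to estimate the enrichment they provide. The sketch below uses the simple relation EF = (sample volume / eluate volume) x recovery; the volumes and recovery are hypothetical illustrations, not figures from the studies cited above.

```python
# Minimal sketch: the preconcentration (enrichment) factor achievable by an
# extraction step such as SPE, under the simplifying assumption that
#   EF = (sample volume / eluate volume) * fractional recovery.
# All numbers are hypothetical, not values from the cited studies.

def enrichment_factor(v_sample_ml: float, v_eluate_ml: float, recovery: float) -> float:
    """Concentration gain from extracting v_sample_ml and eluting into v_eluate_ml."""
    return (v_sample_ml / v_eluate_ml) * recovery

c_sample_ug_l = 5.0  # FA in the raw water sample (ug/L), hypothetical
ef = enrichment_factor(v_sample_ml=500.0, v_eluate_ml=2.0, recovery=0.85)
c_extract_ug_l = c_sample_ug_l * ef

print(f"Enrichment factor: {ef:.0f}x")
print(f"Extract concentration: {c_extract_ug_l:.0f} ug/L "
      f"(from {c_sample_ug_l} ug/L in the sample)")
```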
3.2 Water and wastewater sample extraction
For formaldehyde analysis in drinking water, we can use solid-phase extraction (SPE), which is commonly used, easy to operate and available in most laboratories. For formaldehyde analysis in wastewater, particulates should be removed by filtration prior to extraction, because particulate matter in the sample can interfere with the analysis, for example by absorbing some analytes of interest and causing low analytical recoveries. SPE, USE, SPME or other advanced techniques can then be used for sample extraction.

Current techniques for sample analysis

4.1 Spectrophotometric methods
A spectrophotometer measures the intensity and amount of light absorbed or reflected by the analytes as a function of colour or wavelength (Skoog et al., 2007).

4.1.1 Reflectance spectrophotometry
One study developed a method that successfully monitored FA concentrations in water samples using purpald as the colorimetric reagent (Hill et al., 2009). First, a colourless intermediate is formed by purpald reacting with FA in alkaline solution; an intensely purple tetrazine then forms on oxidation of the intermediate and serves as the colorimetric product (Dickinson et al., 1974). After the colour reaction is completed in the syringe, the 1 mL sample is passed through an extraction disk. The amount of extracted analyte is then measured on-disk using a BYK-Gardner diffuse reflectance spectrometer. The reflectance data are collected at 20 nm intervals over the visible spectral range, after which the BYK-Gardner QC-Link software on a PC is used to calculate the Kubelka-Munk function F(R). The result is compared with a calibration plot of F(R) at 700 nm, the most effective analytical wavelength, to determine the FA concentration (Hill et al., 2009). This method successfully analyzed FA concentrations in the range 0.08 to 20 ppm using only 1 mL samples and taking just a few minutes.

4.1.2 UV/Visible spectrophotometry
A UV/Visible spectrophotometer measures absorbance, the difference in light intensity before and after passing through a sample, as a function of wavelength or colour (Skoog et al., 2007). One study of FA determination used the Hantzsch reaction for derivatization: the colourless solution gradually turns yellow owing to the synthesis of DADHL, formed from the condensation of formaldehyde with acetylacetone in the presence of excess ammonium salt. UV/Vis detection was then carried out with a UV-1603 spectrophotometer (Reche et al., 2000). The maximum absorbance, at approximately 415 nm, was used in analyzing the FA concentration in the sample and the standard solution (Shimadzu Application News).

4.1.3 Advantages and disadvantages
Spectrophotometers are widely used in many laboratories and institutes. The method's advantages include low instrument capital and operating costs and easy operation. However, its sensitivity and selectivity are lower than those of GC and HPLC methods, so its application to extremely low FA concentrations in water samples is limited.
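The calibration-plot step that both spectrophotometric methods rely on can be sketched as a simple linear fit, as below. The standards and signal values are hypothetical stand-ins for a quantity such as F(R) at 700 nm; they are not data from Hill et al. (2009).

```python
# Minimal sketch of the calibration step in section 4.1: fit a straight line
# of signal vs. standard concentration, then invert it to read off an unknown.
# The standards and signals below are hypothetical.

standards = [0.0, 0.5, 1.0, 5.0, 10.0, 20.0]      # FA standards, ppm
signals   = [0.01, 0.06, 0.11, 0.52, 1.05, 2.08]  # e.g. F(R) at 700 nm

n = len(standards)
mean_x = sum(standards) / n
mean_y = sum(signals) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(standards, signals))
         / sum((x - mean_x) ** 2 for x in standards))
intercept = mean_y - slope * mean_x

def concentration(signal: float) -> float:
    """Invert the calibration line to estimate concentration from a signal."""
    return (signal - intercept) / slope

print(f"Calibration: signal = {slope:.4f} * conc + {intercept:.4f}")
print(f"Unknown with signal 0.80 -> {concentration(0.80):.2f} ppm")
```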
4.2 Chromatographic methods
For chromatography, since FA concentrations in water and wastewater are very low, FA must be derivatized prior to analysis to ensure quantitative and qualitative detection. Nash reagent, dinitrophenylhydrazine and PFBHA reagent are typical agents that undergo colour reactions with FA, and their derivatives provide better sensitivity for UV, fluorescence or MS detection (Michels et al., 2001).

4.2.1 GC
GC is widely used in FA analysis. In most studies using GC for FA determination, the mobile phase is helium and various stationary phases coat the column. By measuring the different retention times of the analytes, FA concentrations can be determined.

1. GC/MS. In FA analysis, the different molecules in solution are separated as the sample travels through the GC, and the mass-to-charge ratios of the ionized FA fragments are then detected by MS (Robert et al., 2007). One study used pentafluorobenzyl hydroxylamine (PFBHA) reagent to form derivatives, then a Varian CP-3800 GC system connected to a Varian 4000 ion trap MS for detection. The injector was operated at 250 °C in split mode, and separation was conducted on a 0.25 μm DB5-MS capillary column. Electron impact ionization (EI) in full scan from m/z 150 to 275 was used in the MS analysis (Trenholm et al., 2008). The FA concentrations could then be measured from the recorded mass-to-charge ratios. Another similar study also used the PFBHA derivatization reagent with GC/MS (Bianchi et al., 2007).

Advantages and disadvantages of GC/MS: Combining GC with MS gives better identification and separation of molecules than GC alone, since molecules behave differently in GC and in MS. For FA analysis, GC/MS is widely used because of its rapid operation, high precision and selectivity; considering its good performance and cost-effectiveness, it has been proposed as an alternative to traditional methods (US EPA, 1998). One drawback is that GC/MS is less sensitive than HPLC in identifying the FA derivatives. Another problem is that GC/MS is susceptible to interference: wastewater samples often contain complex mixtures of compounds, and the interferences may lead to imprecise analysis.

2. GC/ECD. The US EPA offers another method to measure FA. Oxime derivatives are formed by adding pentafluorobenzyl hydroxylamine (PFBHA) reagent to the FA solution at pH 4. They are then extracted from the water with 4 mL of hexane. After an acidic wash step, the extracts are analyzed by GC with electron capture detection (GC/ECD). The analytes are identified by comparison with the calibration standard. Two chromatographic peaks are observed for FA, since both the (E) and (Z) isomers form for FA carbonyl compounds (US EPA, 1998).

Comparison of GC/MS and GC/ECD: ECD offers better detection limits (

Monday, August 19, 2019

Hurricanes' Effects on Society Essay

Hurricanes' Effects on Society

Hurricanes are among nature's most intense and phenomenal storms. Yet, as phenomenal as they are, they remain one of the deadliest and most disastrous natural occurrences, and they continue to plague coastal residents with fears of their homes being destroyed, their towns wiped out, and loved ones disappearing or dying. Roger A. Pielke Jr. and Roger A. Pielke Sr., in their book Hurricanes: Their Nature and Impacts on Society, state that the hurricane belongs to a class of phenomena called cyclones, which refers to "any weather system that circulates in a counterclockwise direction in the Northern Hemisphere and in a clockwise direction in the Southern Hemisphere" (p. 15). The word "hurricane," originating from the Spanish word huracan, probably came from the Carib and other Indian tribes that once inhabited the Caribbean islands and Central and South America (Tufty p. 13). According to Barbara Tufty's Hurricanes, the Guatemalan Indians called the god of stormy weather Hunrakan, while the Quiche of southern Guatemala spoke of Hurakan as their god of thunder and lightning (p. 13). Hurricanes are defined as large, rotating storms with strong winds blowing around the "eye," or relatively calm center, where winds and rain clouds spiral in large bands (Tufty p. 1, 13). According to Nature's Hurricane Recipe by James C. White II, hurricanes are rated on the Saffir-Simpson Hurricane Scale from one to five, based on the intensity of the hurricane, with wind speed being the determining factor (a small classification sketch follows the references below). A category one hurricane sustains winds of 74 to 95 mph, with a storm surge of about four to five feet, and causes no real damage to building structures. A category two ... ...l buildings, rural neighborhoods, and crops and livestock.

References

Landsea, C.W., Franklin, J.L., McAdie, C.J., Beven, J.L., Gross, J.M., Jarvinen, B.R., et al. (2004). A Reanalysis of Hurricane Andrew's Intensity. Bulletin of the American Meteorological Society, 85, 1699-1712.

Pielke, R.A. Jr., & Pielke, R.A. Sr. (1997). Hurricanes: Their Nature and Impacts on Society. NY: John Wiley & Sons Inc.

Rodriguez, H. (1997). A Socioeconomic Analysis of Hurricanes in Puerto Rico: An Overview of Disaster Mitigation and Preparedness. 121-143. In H.F. Diaz & R.S. Pulwarty (Eds.), Hurricanes: Climate and Socioeconomic Impacts. Germany: Springer-Verlag Berlin Heidelberg.

Tufty, B. (1970). 1001 Questions Answered about Storms and Other Natural Disasters. NY: Dover Publications, Inc.

White, J.C. (2005). Nature's Hurricane Recipe. Mercury, 34, 28-33.
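As promised above, the short sketch below classifies a sustained wind speed on the Saffir-Simpson scale, using the commonly cited wind thresholds from the essay's era. It is an illustration only, not code from any cited source.

```python
# Minimal sketch of Saffir-Simpson classification by sustained wind speed (mph),
# using the commonly cited thresholds: 74-95, 96-110, 111-130, 131-155, >155.

def saffir_simpson_category(wind_mph: float) -> int:
    """Return hurricane category 1-5 for a sustained wind speed; 0 if below hurricane force."""
    thresholds = [(156, 5), (131, 4), (111, 3), (96, 2), (74, 1)]
    for lower_bound, category in thresholds:
        if wind_mph >= lower_bound:
            return category
    return 0  # tropical storm or weaker

for wind in (80, 100, 120, 145, 165):
    print(f"{wind} mph -> category {saffir_simpson_category(wind)}")
```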

Sunday, August 18, 2019

Literary Utopian Societies Essay

Literary Utopian Societies

"The vision of one century is often the reality of the next..." (Nelson 108). Throughout time, great minds have constructed their own visions of utopia. Through the study of utopias, one finds that these "perfect" societies have many flaws. For example, most utopias tend to have an authoritarian nature (Manuel 3). Another obvious imperfection found in the majority of utopias is a faulty social class system (Thomas 94). But one must realize that the flaws found in utopian societies serve a specific purpose: these faults are used to indicate problems in contemporary society (Eurich 5, Targowski 1). Over the years, utopian societies have been beneficial in setting improved standards for society. By pointing out the faults of society, improvement is the most likely next step. Citizens should take advantage of utopian literature in order to better future societal conditions (Nelson 104). Because it is impossible to create a perfect society in which everyone's needs can be met, society must analyze utopias in order to improve its existing environment.

Plato's Republic was the first "true" work considered to be utopian literature. In fact, the Republic influenced almost all later texts written on the subject of utopia (Manuel 7). Although the Republic was one of the most influential works in utopian literature, the society it represented also had many obvious flaws. First, Plato's utopia had a distinct class system (Morley iii, Bloom xiii). The privileged class that ruled the society also enforced censorship in order to keep control over the Republic (Manuel 5). To perform all of the lowly tasks of the society, a system of slavery was enforced (Manuel 9). In addition, different forms of propaganda were used to keep the citizens in check (Manuel 5, Bloom xiv). The political and economic systems, in which the wealthy class controlled all the funds, were extremely restrictive (Mumford 4, Bloom xiii). With the society being opposed to change, it would obviously have failed. A static society, in which propaganda is used to promote the State, disrupts the creative thinking process. And, without the creative thinking process, intellectual growth as a whole also slows (Mumford 4, Benz 3). Yet another famous utopian society that appears to thrive on the surface is that of Sir Thomas More's Utopia. More's society was ...

References

Huxley, Aldous. Brave New World. New York: Harper & Brothers, 1932.

Kateb, George, ed. Utopia. New York: Atherton Press, 1971.

Manuel, Frank E., ed. Utopias and Utopian Thought. Boston: Houghton Mifflin Company, 1966.

Morley, Henry, ed. Ideal Commonwealths. New York: Kennikat Press, 1968.

Mumford, Lewis. The Story of Utopias. New York: The Viking Press, 1962.

Nelson, William, ed. Twentieth Century Interpretations of Utopia. Englewood Cliffs, New Jersey: Prentice-Hall, 1968.

Taragowski, Henry W. Utopia. 6 Jan. 1999.

Thomas, John L., ed. Looking Backward 2000-1887. Cambridge, Massachusetts: Harvard University Press, 1967.

Utopia and Utopian Philosophy. Ed. Jon Will. 1999. Utopia Pathway Association. 6 Jan. 1999.

Validation of Electronic Sources

Phillip Benz received a Master's Degree in English Literature and currently teaches in France. Philip Coupland is a professor at Warwick University. Jon Will is the Vice President of the Utopia Pathway Association. Henry Taragowski is a professor at Xavier University. Peter Fitting is the Chairman of the Society for Utopian Studies.

John Cage Essay

John Cage Defined in the 1950s

John Cage is considered by many to be the defining voice of avant-garde music throughout the 20th century. Fusing philosophy with composition, he reinvented the face of modern music, leading composer Arnold Schoenberg to declare, "Of course he's not a composer, but he's an inventor -- of genius" (Kostelanetz 6). For Cage, the 1950s brought a series of critical events that both refined his message as a composer and brought him great fame, or infamy to some. His interest in Eastern Zen philosophy blossomed throughout the early part of the decade, a subject actively pursued and reinforced in all of his subsequent musical works. The 1950s also brought Cage the revelation that sound is inherently present in all of us, when he entered an anechoic chamber at Harvard University; this manifested in his work as the famous "silent" piece 4'33". Cage's involvement at Black Mountain College during this period contributed remarkable development to his music and the ideas that defined the rest of his works. The 1950s were the defining decade for the career of the philosopher and composer John Cage.

Cage was born into a Los Angeles middle-class family in 1912. His father was a less than successful inventor -- dabbling in the areas of submarines, medicine, space travel, and electrical engineering -- who instilled in him the idea that "if someone says 'can't', that shows you what to do" (Cage, An Autobiographical Statement). Cage learned how to play the piano as a child and took a liking to Grieg, and even briefly considered becoming a concert pianist. However, when Cage went to college it was to become a writer. He was deeply disillusioned by the conformity he saw in the students: I was shocked at college... ... remainder of his life.

References

Cage, John. "An Autobiographical Statement." 1988. http://www.newalbion.com/artists/cagej/autobiog.html

Cage, John. For the Birds: John Cage in Conversation with Daniel Charles. Salem, NH: Marion Boyars. 1976.

Cage, John. Silence: Lectures and Writings by John Cage. Hanover, NH: Wesleyan University Press. 1961.

Kostelanetz, Richard. Conversing with Cage. New York, NY: Routledge. 2004 (1987 orig.).

Patterson, David Wayne. Appraising the Catchwords, c. 1942-1959: John Cage's Asian-Derived Rhetoric and the Historical Reference of Black Mountain College. PhD thesis, Columbia University. 1996.

Pritchett, James. The Music of John Cage. New York, NY: Cambridge University Press. 1993.

Solomon, Larry J., PhD. "The Sounds of Silence: John Cage and 4'33"". Pima College, 1998. http://music.research.home.att.net/4min33se.htm

Saturday, August 17, 2019

Authority Essay

Define the term "authority." What does it mean to be authoritative, and how do you go about establishing whether a source is, indeed, credible? Why is it important to not only invite authorities to speak in your writing, but also to establish your own authority as you write?

Authority by definition: the power to give orders or make decisions; the power or right to direct or control someone or something; or the confident quality of someone who knows a lot about something or who is respected or obeyed by other people (Merriam-Webster, 2010). Figures of authority are extraordinarily significant to the credibility of any paper. Including citations from members of society with an advanced skill set will not only solidify proposed ideas, but can also aid in swaying an argument (Ballenger, 17). Valid credibility can go a long way in improving the impact a piece makes on its reader. While it is important to include factual information supporting the writer's proposed idea, it is equally important to establish a voice within the piece. Each article of information that comes from a professional standpoint can be a stepping stone toward the finished product of the writer's work. Weaving in an authoritative voice simultaneously strengthens the paper and the validity of the writer's work. Lastly, citing authoritative individuals in a piece grants the use of their facts without sending the writer toward plagiarism. Although it is often unintentional, plagiarism happens quite frequently. It is imperative that citations of authority figures (i.e., scholars, researchers, critics, or specialists) be included in the piece, to ensure that the professionalism of the message can be brought to light using convincing sources.

Friday, August 16, 2019

John Locke's View on Innate Knowledge Essay

John Locke, a renowned English philosopher of the seventeenth century, argued against the then-prevalent belief in innate knowledge, such as that championed by Descartes. Many of Locke's arguments begin with criticisms of other philosophers' opinions on innate knowledge, notably Descartes'; they are therefore direct rebuttals of Descartes and other philosophers' beliefs about the existence of innate knowledge. To arrive at the conclusion that innate knowledge is impossible, Locke offers various premises and rebuttals that add weight to his arguments.

First, Locke emphasizes that knowledge and ideas are learned through experience, not innately. He argues that people's minds at birth are a 'blank slate' that is later filled through experience. Here, the 'senses' play an important role, because 'the knowledge of some truths, as Locke confesses, is very early in the mind; but in a way that shows them not to be innate'. By this, Locke argues that some ideas are actually in the mind from an early age, but these ideas are furnished by the senses starting in the womb. For example, the color blue and the 'blueness' of something are not learned innately but through exposure to a blue object or thing. So if we do have a universal understanding of 'blueness', it is because we have been exposed to blue objects ever since we were young. The blue sky is what many would associate with blue easily and at a young age.

Second, Locke argues that people have no innate principles. Locke contended that innate principles rely upon innate ideas within people, but such innate ideas do not exist. He says this on the basis that there is no 'universal consent' that everyone agrees upon. Locke writes that 'There is nothing more commonly taken for granted than that there are certain principles universally agreed upon by all mankind, but there are none to which all mankind give a universal assent'. This argues against the very foundation of the idea of innate knowledge, because principles that garner universal assent are thought to be known innately simply because that is the best explanation available. However, that cannot even be an explanation for such a belief, because no 'universal consent' exists. Rationalists argue that there are in fact some principles that are universally agreed upon, such as the principle of identity. But it is far-fetched to claim that everyone knows this principle of identity because, at the least, children and idiots (the less intelligent) are not acquainted with it.

There are several objections to the premises and arguments outlined above. Locke's acknowledgment that some ideas are in the mind at an early age gives credence to the argument for innate ideas. For ideas to be furnished by the senses later on, there have to be ideas laid down as foundations. If such ideas are innate, as Locke acknowledges, then no matter how trivial or insignificant one may argue them to be, the claim could give weight to the idea of innate knowledge. Innate knowledge, after all, doesn't imply that all ideas are innate, because, as one can see, there are things that we learn through our experiences and encounters in life as well. So as long as even a basic principle is innate early in life, innate knowledge can be known to exist. The validity of the claim that there is no 'universal consent' is also questionable.
Locke argues that no principle exists that all mankind agrees upon, because there are those who are not acquainted with any such principle, notably children and idiots. However, the terms children and idiots are somewhat misguided. How are children, and especially idiots, categorized? Are there specific criteria for classifying someone as an idiot? It is hard to generalize that idiots, or those deemed less intelligent, are not acquainted with certain principles, because at times intelligence is not the best indicator of someone's knowledge or ideas. There are many intelligent people who take their status for granted and do not think, contemplate, or make an effort to their best extent.

The objections made against the initial arguments can be defended in certain ways. Regarding the objection that innate knowledge exists because there are innate ideas in the mind at an early age, the term 'innate' should be considered again in greater detail. Innate knowledge has to be significant enough for us to recount for it to be considered as such. Thus, there is a risk in treating the ideas within our minds early on as innate. For example, the knowledge of our hands and feet may be embedded in us at a very early stage, but it is not so significant. The knowledge that we gain through the use of our hands and feet, by contrast, could be vital knowledge that we recount throughout life. Throwing a baseball properly under a coach's instructions is an example.

Also, there is the claim that intelligence cannot be the sole indicator of one's acquisition of 'universal consent' and that there is no clear distinction between those who can understand universal principles and those who cannot. However, the important focus here should not be on defining 'idiots' and intelligence, but on the fact that universal consent is hard to assemble from every single member of mankind. Therefore, more should be considered than just innate knowledge that could garner universal consent. Empirical principles derived from experience could garner universal assent too. For example, the fear of dying or getting seriously injured could mean that people would not jump off the roofs of tall buildings. And this belief could be universal among all.

Thursday, August 15, 2019

Describe the Social, Economic and Cultural Factors

These days children and young people are involved in many issues in society which can affect their lives. Religion is found all across the UK now, and many children who live here follow different religions. Religions have different rules from one another, and these rules can affect children. For example, if a Muslim child is friends with a child who doesn't have a religion, and that child can go out in the street or sleep over at a friend's, the Muslim child might not be able to do that, which might make them feel isolated and upset. Or consider children who have come from another country: their parents have a different cultural background, which means the child will be raised differently and have different views, and this can cause conflict with other children who have been brought up in the British culture. Personal choice is another thing that could impact a child's life. If a child's parents make a choice to live in a different way, e.g. same-sex parents or travelling a lot, this could affect the child's education, because they'd have to travel a great deal as part of the travelling community.

Another factor could be social. A child or young person could have only one friend and stick to them, but that friend might want to go off with other children sometimes, which can make the child feel lonely, and they might then find it hard to make new friends. Or a child could always be with everyone, which is good for developing social skills and learning how to socialize, but it could also be bad because they aren't as independent as they should be. Family also has a big impact. A child could be a 'young carer' because their mum or dad is disabled; this could make them feel upset and worried all the time, which would affect school work and could affect health if no money is coming in to feed or shelter them. Some families may have different styles of parenting; they might expect a great deal of their child, and a lack of support can lead to low self-esteem. Another social factor is disability: children who have a disability might find it hard to fit in or make friends.

If children are suffering from problems at home, and the child attends a setting (nursery, school, youth clubs), the setting could get social services involved, which could then result in children being taken into care. Economic factors can also include addictions: parents might have a drug addiction, which could mean all the income being spent on drugs and the family not being able to afford a house in a decent community. This could affect a child's development if they are living in cramped conditions or poor-quality housing.

Wednesday, August 14, 2019

Karl Marx, Max Weber and Emile Durkheim offered differing perspectives on the role of religion Essay

Karl Marx, Max Weber and Emile Durkheim offered differing perspectives on the role of religion. Choose the theorist whose insights you prefer and outline how they perceived religion operating socially. Discuss why you chose your preferred theorist's views over the others.

Marx, Durkheim and Weber each had different sociological views of the role and function of religion. My preferred theorist's views on religion are Karl Marx's, as I feel his ideas come closest to what religion actually is. I have chosen Marx's theory on religion because I feel it is the most similar to my own views on the subject. His views are more interesting to me because I don't practise any religion, and they expand on some of my own thoughts about religion. His theory also has more relevance in society today, as people are now struggling due to the economic downturn, which is thoroughly testing people's faith. There is a bigger decline in religion this century, as most of the world's population has more resources and freedom of speech to decide how they really feel about religion, and people aren't blindsided by the church anymore. Even if people are not aware of Marx's ideas about religion, I feel that the majority would hold similar views based on these ideas, as harder times have made people question their own beliefs. I will also briefly outline each theorist's work on religion and then discuss why I chose Karl Marx's theories.

Karl Marx's outlook on religion was that it was a deception of sorts, meant to give people false hope of something better waiting for them while they were being exploited and oppressed by these religious ideals. Marx thought it was a result of a class society, because not only was its aim to ease the pain of oppression, it also acted as a tool of that oppression (McDonald, 2009). Emile Durkheim thought that religion brought communities together and strengthened them: all religions acted as a 'socialising agent' and shared 'a coherent system of beliefs and practices serving universal human needs and purposes'. He also conducted a study of the Australian Aborigines and concluded that 'religion was the source of all harmonious social life' (McDonald, 2009). He felt that religion varies between different societies and can influence people's day-to-day lives. In 1912 he wrote The Elementary Forms of the Religious Life, which showed that all religions have certain features in common. Max Weber's view was not too far from Marx's theory, as he felt that religion was used to strengthen people's work ethic, with the promise that success through hard work would lead to their salvation. He felt that the various religious policies did not fit with the development of capitalism.

Religion is defined as 'the belief in and worship of a superhuman controlling power, esp. a personal God or gods'. But when reading Karl Marx's thoughts on the subject, it becomes clearer that not only do you need a strong belief to endure what God's plan is for you, but that religion can take away your sense of individuality and force people into a socially regulated group practicing the church's 'norms'. One of his famous analyses of religion was that it 'is the opium of the people' (Goldstein & McKinnon, 2009). It is amusing that Marx used opium in comparison with religion, seeing as opium was used to help people for a while in the 1800s, but with more medicines becoming available, its use eventually became frowned upon.
Ironic, then, that this is how many people would perceive the church in Ireland today. In Marx, Critical Theory and Religion, McKinnon writes that 'for most twenty-first century readers, opium means something quite simple and obvious, and the comparison between the two terms seems perfectly literal. Opium is a drug that kills the pain, distorts reality, and an artificial source of solace to which some poor souls can become addicted; so also religion.' This metaphor shows me that, of the three theorists, Marx was the most realistic and could see through the organised industry that religion was, and is even more so today. Durkheim's theories make sense and are, for me, a nice and fluffy way of looking at religion, but I suspect that if he were to see the route religion has gone down in modern society, he would no longer feel the same about the majority of religions, for example given the scandals in the Catholic church over the past forty years that are only really surfacing now. Weber's thoughts were more rational, in that what was expected of people was to keep their heads down and they would eventually be rewarded with Heaven. Even if the number of people practicing religion is in decline in today's society, Marx's views on the subject are definitely the most valid. Religions' expectations of their followers may not be as extreme now as they were back in the 1800s, but of the three theorists, Marx's views are the most realistic about what religion truly is. His ability to see what religion was actually doing to people's lives back then is remarkable, and for his words to still have such relevance in modern society shows that he was extremely perceptive. Marxism also assumes that religion will eventually disappear, and for someone to envisage that over one hundred years ago clearly shows he knew what he was talking about. And that is why I chose Marx.

Tuesday, August 13, 2019

Study of Nurse Workarounds in a Hospital Using a Bar Code Medication Administration System (Research Paper)

Study of Nurse Workarounds in a Hospital Using Bar Code Medication Administration System - Research Paper Example

The implementation of BCMA technology might impact negatively on nurses' attitudes toward the medication administration process. This, in turn, might make work processes more difficult for nurses while administering medication to patients. This paper will provide a response to Gooder's journal article with regard to nurses' perceptions of the use of the BCMA system, and then provide my judgment on the issue. It will also review three other journal articles to establish whether they agree with my viewpoint. Finally, the paper will present my evaluation and the three criteria used in my judgment.

Research reveals that medication errors are the most frequently experienced preventable errors, at 19% (Gooder, 2011). Gooder notes that most medication errors (34%) take place during medication administration. The impacts of these errors fall directly on patients and can cause grave injuries. It is for this reason that the Institute of Medicine (IOM) recommended the introduction of the bar code medication administration (BCMA) system as a solution to medication administration errors. This, argues Gooder, will reduce medication errors by about 86%. This is plausible because it will enhance the prevention of the patient injuries that have characterized many of today's hospitals. The technology will also improve the overall quality of services offered in the hospital. With its application, there will be faster administration of medication and improved accuracy in service delivery, improving the overall satisfaction of patients. In spite of the benefits of the BCMA system with regard to error reduction, Gooder notes some concerns about its safety and effectiveness. Among these concerns is non-compliance with the BCMA system by nurses in many hospital settings.