Tuesday, November 5, 2019

Nitrogen in Tires

Nitrogen in Tires

Question: What makes nitrogen in tires better than air? I see a lot of tires with the green cap indicating they are filled with nitrogen. Is there any advantage to putting nitrogen in my automobile tires instead of compressed air? How does it work?

Answer: There are multiple reasons why nitrogen is preferable to air in automobile tires:

- better pressure retention, leading to increased fuel economy and improved tire lifespan
- cooler running temperatures, accompanied by less pressure fluctuation with temperature change
- less tendency toward wheel rot

To understand why, it's helpful to review the composition of air. Air is mostly nitrogen (78%), with 21% oxygen and smaller amounts of carbon dioxide, water vapor, and other gases. The oxygen and water vapor are the molecules that matter. Although you might think oxygen would be a larger molecule than nitrogen because it has a higher mass on the periodic table, elements further along a period actually have a smaller atomic radius because of the way the electron shells fill. An oxygen molecule, O2, is smaller than a nitrogen molecule, N2, making it easier for oxygen to migrate through the wall of a tire. Tires filled with air therefore deflate more quickly than those filled with pure nitrogen.

Is it enough to matter? A 2007 Consumer Reports study compared air-inflated tires and nitrogen-inflated tires to see which lost pressure more quickly and whether the difference was significant. The study compared 31 different tire models inflated to 30 psi. They followed the tire pressure for a year and found air-filled tires lost an average of 3.5 psi, while nitrogen-filled tires lost an average of 2.2 psi. In other words, air-filled tires leak about 1.59 times as quickly as nitrogen-filled tires. The leakage rate varied widely between different brands of tires, so if a manufacturer recommends filling a tire with nitrogen, it's best to heed the advice. For example, the BF Goodrich tire in the test lost 7 psi. Tire age also mattered. Presumably, older tires accumulate tiny fractures which make them leakier with time and wear.

Water is the other molecule of interest. If you only ever fill up your tires with dry air, the effects of water aren't a problem, but not all compressors remove water vapor. Water should not lead to wheel rot in modern wheels because they are made with aluminum, which forms a layer of aluminum oxide when exposed to water. The oxide layer protects the aluminum from further attack in much the same way chrome protects steel. However, if you are using wheels that do not have that protection, water can attack the metal and degrade it. The more common problem (which I have noted in my Corvette when I have used air rather than nitrogen) is that water vapor leads to pressure fluctuations with temperature. If there is water in your compressed air, it enters the tires. As the tires heat up, the water vaporizes and expands, increasing tire pressure much more than the expansion of nitrogen and oxygen alone would. As the tires cool, pressure drops appreciably. These changes reduce tire life expectancy and affect fuel economy. Again, the magnitude of the effect likely depends on the brand of tire, the age of the tire, and how much water is in your air.

The Bottom Line

The important thing is to make sure your tires are kept inflated at the proper pressure. This is much more important than whether the tires are inflated with nitrogen or with air.
However, if your tires are expensive or you drive under extreme conditions (i.e., at high speeds or with extreme temperature changes over the course of a trip), it's worth it to use nitrogen. If you have low pressure but normally fill with nitrogen, it's better to add compressed air than to wait until you can get nitrogen, but you may see a difference in the behavior of your tire pressure. If there is water in with the air, any problems will likely be lasting, since there's nowhere for the water to go. Air is fine for most tires and preferable for a vehicle you'll take to remote locations, since compressed air is much more readily available than nitrogen. (A rough back-of-the-envelope version of the numbers above is sketched below.)
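For readers who want to check the figures, here is a minimal Python sketch, not taken from the Consumer Reports study or the original post. It reproduces the leak-rate ratio quoted above and uses Gay-Lussac's law to show how much the pressure of a dry gas rises as a tire warms; the 20 °C and 50 °C temperatures, the 14.7 psi atmospheric pressure, and the helper name hot_gauge_pressure_psi are illustrative assumptions.

# Rough back-of-the-envelope sketch; only the 3.5 and 2.2 psi figures come from the post above.
ATM_PSI = 14.7  # approximate atmospheric pressure at sea level

# Leak-rate comparison from the 2007 Consumer Reports figures quoted above
air_loss_psi = 3.5
nitrogen_loss_psi = 2.2
ratio = air_loss_psi / nitrogen_loss_psi
print(f"Air-filled tires lose pressure about {ratio:.2f}x as fast as nitrogen-filled tires")

# Pressure rise of a dry gas as the tire warms (Gay-Lussac's law, constant volume).
# Gauge pressure is converted to absolute pressure before applying the gas law.
def hot_gauge_pressure_psi(cold_gauge_psi, cold_c=20.0, hot_c=50.0):
    cold_abs = cold_gauge_psi + ATM_PSI
    hot_abs = cold_abs * (hot_c + 273.15) / (cold_c + 273.15)
    return hot_abs - ATM_PSI

print(f"A tire set to 30 psi at 20 C reads about {hot_gauge_pressure_psi(30):.1f} psi at 50 C (dry gas only)")
# Any liquid water that vaporizes adds its own partial pressure on top of this figure.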

Saturday, November 2, 2019

Advantages And Disadvantages Of A Corporation Diversifying Essay

Advantages And Disadvantages Of A Corporation Diversifying Internationally - Essay Example

Diversification may be used to refer to the variation between businesses within a company. This variation may be by products and/or services. The meaning of diversification varies across businesses, as what counts as diversification in one organization may have no significance in another; thus, the definition of diversification is subjective. Nonetheless, business diversification may be along the dimensions of cost leadership, production of commodity products, new product development, market leadership, strong brand names, high-value-added products, niche markets served, customers shared, advertisement emphasis, customer service emphasis, and product design. Other dimensions may be emphasis on research and development, raw materials used, quality emphasis, distribution networks, and company size. International diversification entails diversifying an investment portfolio across diverse geographic regions in order to lessen overall risk and enhance returns on the portfolio. Corporations embrace international diversification by locating their operations in diverse nations and regions so as to reduce operational and business risk. There are three types of international diversification: related diversification, unrelated diversification, and the single-product strategy. ... Unrelated diversification is a company-level tactic founded on a multibusiness model with the aim of increasing profitability through the use of common organizational capabilities to augment the performance of all the company's business units. Firms that pursue this mode of diversification strategy are referred to as conglomerates, that is, business organizations that operate in numerous diverse industries.

Advantages of international diversification

Diversification and profit stability

The assertions linking diversification to profit stability revolve around the portfolio concept, which holds that investing in a diversified set of stocks with unrelated profits may lower the variability of a corporation's total gains. The portfolio idea relates to product diversification, which may lower the variance of a company's total profits. The reason is that the volatility of several profit streams combined is nearly always less than the volatility of each profit stream independently, on condition that the profit streams are negatively correlated. Research finds that product diversifiers actually enjoy higher profits than non-diversifiers. The degree of risk reduction achievable through unrelated diversification may exceed that which may be attained through related diversification. The reason is that unrelated diversification can lower industry-specific systematic risk because it entails diversification across numerous industries. On the other hand, related diversification may not lower the industry-specific systematic risk arising within an industry. Industry-specific systematic risks are the risks common to all businesses in a certain industry (Kim, Hwang & Burgers, 1989, p. 47). Rugman observed the same, that geographical diversification through direct overseas investment evens out a

Thursday, October 31, 2019

LOST Files in a hospital Essay Example | Topics and Well Written Essays - 2000 words

LOST Files in a hospital - Essay Example

It is through the negative findings the nurse presented to me that I was able to identify possible solutions to the problem. I have always been effective in working with organizations, and so, as is my habit, I was able to present the solutions the board of directors demanded. The faults I identified may be quite expensive to fix, but the results are worth the expense. If they are not fixed now, leakage of information may cause even greater damage in the near future. The most important security the organization can offer is security for the patients' information; it is widely recognized as the main duty of every medical institution, apart from the duty of providing care. An estimated $150,000 will be needed to fix the security situation once and for all. A further $120,000 will be required for computer training. I can comfortably promise the board that there will be no regression if all my recommendations are attended to fully. This will include upgrading the entire I.T. department, installing up-to-date surveillance systems, and educating the nurses on the importance of computer security. As much as we train our employees, it would also help if we hired more trained and experienced employees to assist in directing our own. This will cost the organization at most an extra $60,000.

Apart from the problem of using a Management Information System that has not been upgraded, the organization is also facing other problems that contributed to this situation. This is all drawn from the results that were presented to me by the nurse I hired. Basically, the other major problem is the employees' ignorance of what maintaining the organization's security requires. According to the report I received, many nurses log in with their passwords and then leave the system open and accessible to any stranger. This facilitates leakage of important and protected organizational information. The solution

Tuesday, October 29, 2019

Should the U.S. Government Levy Additional Fines or Taxes on Companies Essay

Should the U.S. Government Levy Additional Fines or Taxes on Companies That Ship Jobs Overseas - Essay Example

The paper argues that, according to the McKinsey Global Institute, the threat posed by shipping jobs abroad has been grossly exaggerated. To start with, they argue that the number of jobs lost per year to offshoring is far fewer than the normal rate of job turnover in the economy. Secondly, savings from offshoring enable companies to invest in future technologies that create more jobs at home and abroad. Thirdly, global competition improves the skills of American companies, making them more competitive; companies that offshore have the opportunity to take advantage of distinctive skills that are available overseas. Fourthly, the U.S. runs a trade surplus in services. This means that America needs other countries to buy its surplus services. If America refuses to similarly offer overseas countries a platform for trade (by refusing to procure their services), these countries may opt to retaliate and thus leave the U.S. with no one to trade with in its excess capacity. On the contrary, Thomas Friedman, in "It's a Flat World After All," argues that the convergence of information and communication technologies (ICTs) has leveled the playing field and, if not addressed as a critical issue by U.S. policies, could signal the end of American wealth and global dominance. He further argues that whereas in the past American companies offshored primarily to minimize production costs, nowadays they do so because they are unable to find the talent they need locally. Nobel Laureate Paul Samuelson agrees with Friedman when he states that free trade could leave rich countries worse off by eroding their comparative advantages. Moreover, who says that China, India, Russia, and the other emerging economies are content with providing low-end, low-wage jobs for eternity?

Sunday, October 27, 2019

Kansas City Gun Experiment | Research Analysis

Kansas City Gun Experiment | Research Analysis

INTRODUCTION

This paper provides a critical assessment of a level 3 impact evaluation that was assigned in 2012. The study chosen was the "Kansas City Gun Experiment," which was undertaken by Sherman and Rogan (1995). This paper analyses how well the selected study addressed the issues of reliability of measurement, the internal validity of causal inferences, the external validity of conclusions to the full population the study sampled, and the clarity of the policy implications of applying the results in policing. The essay is divided into six areas. First, a summary of the Kansas City Gun Experiment is presented. The summary gives a brief account of the history of the experiment, describes the criminological theories on which the experiment was based and the methodological processes of the experiment, and briefly describes the experiment's findings. Following the summary, the essay moves to the main assessment of the study. First, the reliability of measurement is critiqued by examining its test-retest reliability and its internal consistency. Second, the internal validity of causal inferences is assessed to determine whether the causal relationship between the two variables was properly demonstrated. The external validity of conclusions to the full population the study sampled is then assessed, followed by the clarity of the policy implications of applying the results in policing.

SUMMARY

The Kansas City Gun Experiment, carried out over 29 weeks from July 7, 1992 to January 27, 1993, was a police patrol project aimed at reducing gun violence, drive-by shootings, and homicides in the U.S.A. It was based on the premise that gun seizures and gun crime are inversely proportional, a hypothesis grounded in the theories of deterrence and incapacitation. The Kansas City Police Department (KCPD) implemented greater proactive police patrols in hotspots where gun crimes were prevalent. These patrols were studied by Sherman and Rogan (1995) using a quasi-experimental design. Two areas were chosen for the experiment. Beat 144, the target area, was chosen due to elevated incidences of violent crime, including homicides and drive-by shootings. Beat 242 was chosen as the comparison area, or control group, due to similar numbers of drive-by shootings. The control group, which was used to increase the reliability of results, was left untreated, meaning that no special efforts or extra patrols were carried out. In contrast, beat 144 was treated with several different strategies for increasing gun seizures. Some of the techniques used included stop and search and safety frisks. Officers working overtime, from 7 pm to 1 am, 7 days a week, were rotated in pairs to provide patrols focused solely on the detection and seizure of guns. These officers did not respond to any calls that were not gun related. The data collected for analysis included the number of guns seized, the number of crimes committed, the number of gun-related calls, and arrest records before initiation of the experiment, during it, and after completion, for both the experimental and control groups. The differences between the experimental and control groups were then compared using a difference-of-means test (t-test); a toy illustration of this kind of comparison is sketched below. Gun crimes in the 52 weeks before and after the patrols in both the experimental and control group were compared using autoregressive integrated moving average (ARIMA) models.
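The following is a minimal Python sketch of that kind of before-during difference-of-means comparison. The weekly counts are made up for illustration (they are not the study's data), and it uses Welch's form of the t statistic rather than the pooled-variance two-tailed t-test the study reports.

from statistics import mean, variance
from math import sqrt

def welch_t(sample_a, sample_b):
    # Welch's t statistic for two independent samples (does not assume equal variances)
    va, vb = variance(sample_a), variance(sample_b)      # sample variances
    se = sqrt(va / len(sample_a) + vb / len(sample_b))   # standard error of the difference in means
    return (mean(sample_a) - mean(sample_b)) / se

# Hypothetical weekly gun-crime counts for a target beat: 29 weeks before vs. 29 weeks during the patrols
before = [7, 9, 6, 8, 10, 7, 9, 8, 6, 7, 9, 8, 7, 10, 9, 8, 7, 6, 9, 8, 7, 9, 8, 10, 7, 8, 9, 6, 8]
during = [4, 5, 3, 6, 4, 5, 4, 3, 5, 4, 6, 4, 3, 5, 4, 4, 5, 3, 4, 6, 4, 5, 3, 4, 5, 4, 3, 5, 4]

print(f"mean before = {mean(before):.2f}, mean during = {mean(during):.2f}, t = {welch_t(before, during):.2f}")
# A large |t| suggests the drop in mean weekly gun crime is unlikely to be due to chance alone.

In the actual study, as described below, the same comparison was also run as a time-series (ARIMA) model to allow for serial correlation between weeks.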
There was indeed a 65% increase in gun seizures and a 49% decrease in gun crime in the target area. In the control group, gun seizures and gun crimes remained relatively unchanged. There was also no significant displacement of gun crimes to the areas surrounding the target area. These results were similar for homicides and drive-by shootings. Citizen surveys also revealed that the general public in the target area was less fearful of crime than those in the control group.

RELIABILITY OF MEASUREMENT

The results of this study suggest that there may be clear implications for other cities wishing to reduce their gun crime. But how valid are these conclusions? How reliable are they? All measurements may contain some element of error. In order for the measurements recorded during the Kansas City Gun Experiment to be sound, they must be free of bias and distortion. Reliability and validity are therefore important in this regard. Reliability can be seen as the extent to which a measurement method is consistent. A measure is reliable when it yields consistent scores or observations of a given phenomenon on different occasions (Bachman and Schutt 2007, p. 87). It refers, first, to the extent to which a method of measurement produces the same results for the same case under the same conditions, known as test-retest reliability, and second, to the extent to which responses to the individual items in a multiple-item measure are consistent with each other, known as internal consistency. A measure that is not reliable cannot be valid. Can it be said that the measurements used in the Kansas City Gun Experiment were reliable and valid? This can be assessed by looking first at its test-retest reliability and second at its internal consistency.

Test-retest reliability

As funding ran out, the study was never repeated under the same conditions in beat 144; thus, strictly speaking, there was never an opportunity to test whether the same or similar results would have been obtained over an equivalent period some time later.

Internal consistency

The measures used in this study included separate bookkeeping and an onsite University of Maryland evaluator who accompanied the officers on 300 hours of hot spots patrol and coded every shift activity narrative for patrol time and enforcement in and out of the area. Property room data on guns seized, computerized crime reports, calls-for-service data, and arrest records were analyzed for both areas under study. Sherman and Rogan (1995) then analyzed the data using four different models. The primary analyses assumed that the gun crime counts were independently sampled from the beats examined before and after the intervention. This model treated the before-during difference in the mean weekly rates of gun crime as an estimate of the magnitude of the effect of the hot spots patrols, and assessed the statistical significance of the differences with standard two-tailed t-tests (Sherman and Rogan, 1995). A second model assumed that the weekly gun crime data points were not independent but were serially correlated, and thus required a Box-Jenkins ARIMA (autoregressive integrated moving average) test of the effect of an abrupt intervention in a time series. A third model examined rare events (homicides and drive-by shootings) aggregated in 6-month totals on the assumption that those counts were independent, using one-way analysis of variance (ANOVA) tests.
A fourth model also assumed independence of observations, and compared the target with the control beat in a before-during chi-square test. The t-tests compared weekly gun crimes for all 29 weeks of the phase 1 patrol program (July 7, 1992, through January 25, 1993) with the 29 weeks preceding phase 1, using difference-of-means tests. The ARIMA models extended the weekly counts to a full 52 weeks before and after the beginning of phase 1. The ANOVA model added another year before phase 1 (all of 1991) as well as 1993, the year after phase 1 (Sherman and Rogan, 1995). It is submitted that Sherman and Rogan's (1995) use of the four different models described above attempted to ensure that an acceptable level of triangulation, and hence internal consistency, was achieved, given that the program design itself did not give the researchers data and an opportunity to check responses to the individual items in a multiple-item measure for consistency.

Reliability may be seen as a prerequisite for validity. Because there was never any opportunity to repeat the study, there was never any opportunity to examine whether the same or similar results would have been obtained in beat 144 over an equivalent period some time later using the same policing tactics. In other words, can it safely be said that the use of the same measures mentioned above, i.e., the onsite University of Maryland evaluator who accompanied the officers on 300 hours of hot spots patrol, together with property room data on guns seized, computerized crime reports, calls-for-service data, and arrest records, would have yielded similar results? The simple answer is no, as it was never done. It is to be noted that the evaluator accompanied the officers on 300 hours of hot spots patrol out of 2,256 (assuming that the 300 referred to patrol car-hours). Is this number statistically sufficient to reduce the occurrence of random errors resulting from over-estimation and under-estimation of recordings? It is accordingly submitted that the reliability of measurement is limited to this instance of the study, as there is no way of testing its stability short of repeating it.

THE INTERNAL VALIDITY OF CAUSAL INFERENCES

Validity is often defined as the extent to which an instrument measures what it purports to measure. Validity requires that an instrument be reliable, but an instrument can be reliable without being valid (Kimberlin and Winterstein, 2008). Validity refers to the accuracy of a measurement, or what conclusions we can draw from the results of such measurement. Therefore, apart from the issue of reliability discussed above, it must also be determined whether the measures used in the Kansas City Gun Experiment measured what they were supposed to measure and whether the causal inferences drawn possess internal validity. Internal validity means that the study measured what it set out to measure, whilst external validity is the ability to make generalizations from the study (Grimes and Schulz, 2002). With respect to internal validity, selection bias, information bias, and confounding are present to some degree in all observational research. According to Grimes and Schulz (2002), selection bias stems from an absence of comparability between the groups being studied. Information bias results from incorrect determination of exposure, outcome, or both, and its effect depends on its type.
If information is gathered differently for one group than for another, the result is bias; by contrast, non-differential misclassification tends to obscure real differences. They view confounding as a mixing or blurring of effects: a researcher attempts to relate an exposure to an outcome but actually measures the effect of a third factor (the confounding variable). Confounding can be controlled in several ways: restriction, matching, stratification, and more sophisticated multivariate techniques. If a reader cannot explain away study results on the basis of selection, information, or confounding bias, then chance might be another explanation. Chance should be examined last, however, since these biases can account for highly significant, though bogus, results. Differentiating between spurious, indirect, and causal associations can be difficult. Criteria such as temporal sequence, the strength and consistency of an association, and evidence of a dose-response effect lend support to a causal link. It is submitted that the onsite University of Maryland evaluator who accompanied the officers on 300 hours of hot spots patrol and coded every shift activity narrative for patrol time and enforcement in and out of the area would have been able to give a rough measure of the number of guns seized, whilst the property room data on guns seized, computerized crime reports, calls-for-service data, and arrest records would, after analysis, have indicated whether gun crimes increased or decreased. It could be inferred, therefore, that as the number of guns seized increased, the level of gun-related crime decreased, and that this inference possessed internal validity.

THE EXTERNAL VALIDITY OF CONCLUSIONS TO THE FULL POPULATION THE STUDY SAMPLED

According to Grimes and Schulz (2002), external validity is the ability to make generalizations from the study. With regard to the Kansas City Gun Experiment, the question which must now be asked is whether the program is likely to be effective in other settings and with other areas, cities, or populations. Steckler and McLeroy (2007), quoting Campbell and Stanley (1966), argue that external validity is as important as internal validity. We have thus gone a bit further: not only is it important to know whether the program is effective, but also whether it is likely to be effective in other settings and with other areas, cities, or populations. This would accordingly lead to the translation of research into practice. It must be submitted that, as with internal validity, because there was never any opportunity to repeat the study, there was never any opportunity to examine whether the same or similar results would have been obtained in beat 144 over an equivalent period some time later using the same policing tactics, or in any other beat for that matter. It cannot therefore be validly concluded that the Kansas City Gun Experiment would be as effective in any other beat area.

THE CLARITY OF POLICY IMPLICATIONS OF APPLYING THE RESULTS IN POLICING

The policy implications of applying the results of the Kansas City Gun Experiment are arguably fairly clear. The most important conclusion is that police can increase the number of guns seized in high gun crime areas at relatively modest cost. Directed patrol around gun crime hot spots is about three times more cost-effective than normal uniformed police activity citywide, on average, in getting guns off the street[1].
Policing bodies around the United States can conclude that although the raw number of guns seized in a particular beat may not be impressively large, the impact of even small increases in guns seized on decreasing the rate of gun crime can be substantial. If a city wants to adopt this policy in a high gun crime area, this experiment shows that it can be successfully implemented[2]. It is also clear from the Kansas City Gun Experiment that a focus on gun detection, with freedom from answering calls for service, can make regular beat officers working on overtime very productive.

REFERENCES

Bachman, R. and Schutt, R. K. (2007), The Practice of Research in Criminology and Criminal Justice, 3rd Edition, Sage Publications Inc.

Campbell, D. T. and Stanley, J. C. (1966), Experimental and Quasi-Experimental Designs, Chicago, Ill.: Rand McNally.

Grimes, David A. and Schulz, Kenneth F. (2002), "Bias and causal associations in observational research."

Kimberlin, Carole L. and Winterstein, Almut G. (2008), "Validity and Reliability of Measurement Instruments used in Research," Research fundamentals, Am J Health-Syst Pharm, Vol 65, Dec 1, 2008.

Sherman, L. and Rogan, D. (1995), "The Kansas City Gun Experiment," National Institute of Justice, Office of Justice Programs, U.S. Department of Justice.

Sherman, Lawrence W. and Berk, R. A. (1984), "The Specific Deterrent Effects of Arrest for Domestic Assault," American Sociological Review, 49 (1984): 261–272.

Steckler, Allan and McLeroy, Kenneth R. (2007), "The Importance of External Validity," Am J Public Health, 2008 January; 98(1): 9–10. doi: 10.2105/AJPH.2007.126847.

[1] Sherman, Lawrence W. and Berk, R. A. (1984), "The Specific Deterrent Effects of Arrest for Domestic Assault," American Sociological Review, 49 (1984): 261–272.

[2]

Friday, October 25, 2019

Banking Sector Essay -- Financial System, Bank Runs

Traditionally, bank runs were a very frequent phenomenon in Europe during the 19th century, seen mostly in emerging countries where the boeotian level was low. Kaminsky and Reinhart introduced a new concept in the banking sector called twin crises. A twin crisis occurs when a currency crisis and a banking crisis take place simultaneously, a pattern that has recurred since 1980. In the United States, this harmful phenomenon receded significantly after 1933, when federal deposit insurance was introduced. In the same direction, governments around the world tried to find ways to prevent crises. Several schemes, such as the suspension of convertibility and a penalty on short-term deposits, followed the implementation of the deposit insurance scheme. As a result of the establishment of these new schemes, policy makers and bankers focused their attention and criticism on the concept of moral hazard, which came to the surface during the savings and loan crisis of the 1980s.

To begin analyzing the macroeconomic concept of bank runs, I have to mention that there are essentially two general views. The first group of economists, such as Diamond and Dybvig (1983), Chang and Velasco (2001), and Cooper and Ross (1998), holds that bank runs are self-fulfilling prophecies, unconnected to the real economy of the country. Under this view, if agents do not expect a bank run to take place, the risk-sharing mechanism of the banking sector operates beneficially and an efficient allocation of resources is achieved. On the other hand, if agents believe that a bank run will occur, then they will all have the tendency to run and withdraw their money as soon as possible to avoid losing it. The second appr... ... implementing the five regulatory policies I mentioned above at the end of the first part of this paper. The Diamond and Dybvig model clearly explains why these five policies were introduced. Firstly, the suspension of convertibility was introduced so that events like the bad-equilibrium example could be avoided, keeping the bank alive. Along the same lines, the tax on short-term deposits was introduced to discourage depositors from withdrawing their money early. In addition, the FCDI scheme was implemented to remove the fear of a bank run from investors and eliminate the occurrence of panic within the financial market. Furthermore, the ICDI scheme was introduced to eliminate the moral hazard that is caused by FCDI. Finally, the capital requirement scheme was established in order to keep banks more liquid and solvent.

Thursday, October 24, 2019

Uses and Gratifications Theory

USES AND GRATIFICATIONS THEORY

The uses and gratifications perspective takes the view of the media consumer. It examines how people use the media and the gratifications they seek and receive from their media behaviors. Uses and gratifications researchers assume that audience members are aware of, and can articulate, their reasons for consuming various media content.

History

The uses and gratifications approach has its roots in the 1940s, when researchers became interested in why people engaged in various forms of media behaviour, such as radio listening or newspaper reading. These early studies were primarily descriptive, seeking to classify the responses of audience members into meaningful categories. For example, Herzog in 1944 identified three types of gratification associated with listening to radio soap operas: emotional release, wishful thinking, and obtaining advice. Berelson in 1949 took advantage of a New York newspaper strike to ask people why they read the paper; the responses fell into five major categories: reading for information, reading for social prestige, reading for escape, reading as a tool for daily living, and reading for a social context. These early studies had little theoretical coherence; in fact, many were inspired by the practical needs of newspaper publishers and radio broadcasters to know the motivations of their audience in order to serve them more efficiently.

The next step in the development of this research began during the late 1950s and continued into the 1960s. In this phase the emphasis was on identifying and operationalizing the many social and psychological variables that were presumed to be the antecedents of different patterns of consumption and gratification. Wilbur Schramm in 1954 asked the question, 'What determines which offerings of mass communication will be selected by a given individual?' The answer he offered is called the fraction of selection, and it looks like