Thursday, October 31, 2019

Betty Friedan, The Feminine Mystique Research Paper

The socio-political environment prevailing in the United States was systematically making women feel contented with their household duties, giving way to an unnoticed and unrecognized sense of discontent, apathy and unhappiness. Betty Friedan's book The Feminine Mystique is credited with bringing this unrecognized marginalization of women to the fore (Horowitz 36). Hence, The Feminine Mystique was indeed a work that revitalized the Women's Liberation Movement.

The book was the outcome of conclusions Betty Friedan drew when she attended her college's fifteen-year reunion. In a survey she conducted at this reunion, she realized that a majority of her classmates were abjectly dismayed by and unsatisfied with the role of the idealized American housewife heaped on them by the dominant social, cultural and gender expectations. It was this survey that made Friedan recognize that the post-war social environment was positively nudging women to adapt to the roles of mothers and housewives. Motivated by this conclusion, the research Friedan subsequently conducted confirmed her worst fears regarding the state of women in post-war America.

Immediately after its publication, The Feminine Mystique became a number-one bestseller, as it was an ideological work that tried to recognize, unravel and define an array of issues faced by women in the post-war world that had hitherto remained ignored, sidelined and neglected (Scanlon 94). The book brought to the fore the fact that confining women to the roles of mothers and housewives not only made them lead unsatisfied and frustrated lives, but also had larger implications for American society. In that context, The Feminine Mystique was a groundbreaking work in the sense that it gave a name to a discontent that had until then gone unarticulated.

Tuesday, October 29, 2019

Texas Revolution of 1836 Essay

Interested readers and researchers can trace the sources in order to verify their credibility; the use of original sources adds to the credibility of the book. Roger Borroel, the author of the book, is a Vietnam veteran who served in the 101st Airborne Division in 1968-1969. He graduated from Purdue University in 1980, has conducted significant research, and has published over 15 works on the Texas Revolution of 1836. From the sources presented in the book, it can be said that the author has done sufficient research on the subject matter and has presented it in a convincing manner. It can be inferred that he is well qualified to write this book.

The Structure of the Book

The book is written as a concise historical account, and to add concreteness to the information it contains, the author has included numerous illustrations: there are twenty-four pictures, drawings and other illustrations, and the book runs to 238 pages. A comprehensive index is provided at the start of the book, which makes it easier for the reader to find any specific piece of information. Since many events are relevant to the Texan revolution, it is normally difficult for readers to locate a specific event in the history; however, this book is written as a chronological sequence of events, which makes it significantly easier for readers to find a relevant event or document. The sources used in the book are official governmental papers, reports, intelligent opinions, diaries and noted personal observations. The book includes Mexican Army documents translated here for the first time, in order to give the reader an insight into the war. This is one of the factors that adds to the credibility of the book, and the title of the book likewise suggests that the historical account is based on original sources.

Book Summary

The Texan revolution was a conflict that grew into a war between Mexico and the settlers in Texas, which was then part of a Mexican state. The war officially started on 2 October 1835. The events that triggered the conflict between the government and the American settlers in Texas were the series of legislative changes brought in by the Mexican President Antonio Lopez de Santa Anna. He modified the Mexican constitution and turned it into a more centralized constitution, which empowered the government and endangered the rights of the citizens. One of the most prominent factors was the Siete Leyes (Seven Laws) passed in 1835 by Antonio Lopez de Santa Anna. These laws modified the very basis of the structure of the Mexican government. The first law provided that citizenship would be granted to those who were able to read and had a specified annual income, except domestic workers. The second law gave the President power to close Congress and overpower the Mexican Supreme Court of Justice of the Nation; military officers were also barred from assuming this office. The third law provided for a bicameral Congress of Deputies and Senators elected by the government. The fourth law specified the manner of selection of the President and Vice President. The fifth law specified the manner of selection of the 11-member Supreme Court. The sixth law holds significant importance, as it was responsible for the increased centralization of the Mexican government.

Sunday, October 27, 2019

Glaucoma Image Processing Technique

Team 19 Members: 40102434 Andrew Collins, 40134357 Connor Cox, 40056301 William Craig, 40133157 Aaron Devine

We have been tasked to develop a system that, through image processing techniques, is able to detect glaucoma. This required us to deepen our knowledge of how to apply pre-processing, segmentation, feature extraction and post-processing to a set of given images in order to produce a classification.

Glaucoma is an eye condition in which the optic nerve, the connection between the eye and the brain, becomes damaged. It can lead to a complete loss of vision if it is not detected and treated early. It is caused when fluid in the eye cannot drain effectively, which builds up pressure that is then applied to the optic nerve. Detecting glaucoma is normally a very time-consuming and expensive process because it requires a trained professional to carry out the examination. The advantage of automating this process is that it frees up that professional's time for other duties.

The system was tested methodically during the creation of the assignment, to help us decide which parameters would best increase the detection rate of glaucoma.

System

The way we tackled this assignment was to build a system that takes image sets, converts them into data sets, and then trains and tests a classifier on them. The system assigns each image in the data set to either healthy or glaucoma detected. Training goes through the following stages, in this order:

Pre-processing
Segmentation
Post-processing
Feature Extraction
Classification

Methodology

To decide on the best choice of technique for each stage of the system, we use a set methodology to standardise our selection process. The aim is to tune the system so that it yields the maximum correctness at each stage, so that by the time it reaches the classification stage it provides the most accurate result. We measure correctness by running a testing/training cycle for each parameter being changed, putting the results into a table, and comparing them to select the best option.

Brightness Enhancement

In our system, I have implemented Automated Brightness Enhancement (ABE). ABE is used to normalise an image so that its mean gray value is equal to 127 (255/2). The image below illustrates what the results look like.

As you can see in the table above, the accuracy of our system significantly decreases when ABE is enabled. Therefore, for the good of the system's accuracy, we will disable ABE. As for why ABE damages accuracy, it likely destroys some data within images that have a more dynamic range than the one shown above, which would result in some gray levels being clipped to 0 or 255. ABE also causes the classifier to return positive for glaucoma for more images than it should, which skews the accuracy figure because of the class ratio imbalance.

Contrast Enhancement

Our system implements three types of contrast enhancement: histogram equalisation, Automated Linear Stretch (ALS) and the Power Law. These three topics are covered extensively in the lecture slides, so in the interest of keeping the report concise, I won't discuss them in depth here. Ultimately, only one of these techniques will be picked.
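Although the power law is the technique we eventually settle on below, a brief sketch may make the idea concrete. The class and method names here are illustrative assumptions rather than our actual code; the input is assumed to be an 8-bit grayscale fundus image, and each gray level g is remapped to 255 * (g/255)^gamma.

    import java.awt.image.BufferedImage;

    // Hedged sketch of a power-law (gamma) point operation on an 8-bit grayscale image.
    // Names are illustrative, not taken from the group's actual code.
    public class PowerLawSketch {

        public static BufferedImage apply(BufferedImage src, double gamma) {
            BufferedImage out = new BufferedImage(src.getWidth(), src.getHeight(),
                    BufferedImage.TYPE_BYTE_GRAY);
            // Precompute the mapping once; the same lookup applies to every pixel.
            int[] lut = new int[256];
            for (int g = 0; g < 256; g++) {
                lut[g] = (int) Math.round(255.0 * Math.pow(g / 255.0, gamma));
            }
            for (int y = 0; y < src.getHeight(); y++) {
                for (int x = 0; x < src.getWidth(); x++) {
                    int g = src.getRaster().getSample(x, y, 0);
                    out.getRaster().setSample(x, y, 0, lut[g]);
                }
            }
            return out;
        }
    }

A gamma below 1 brightens the image and compresses the upper gray levels, which seems consistent with the behaviour we observe at gamma = 0.6 in the tests that follow.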
Automated Linear Stretch

Histogram Equalisation

Power Law

This example shows an error. The system doesn't contain an automated way to find the value of γ (gamma) for each image, so we will test every value of gamma from 0.0 to 2.0 in increments of 0.1 to see whether any of our results provides a higher accuracy than when the power law isn't enabled at all. The value 0.6, highlighted in green, gives an accuracy of 88%; the image below shows the power law being applied when there is an error.

In the image above, the original image is on the left and the processed image is on the right, with their corresponding histograms underneath each. It would appear that the power law has actually made the dynamic range of our image worse. Examining the segmented binary image below helps explain why the accuracy has still risen to 88%. From this image we can see that reducing contrast at the higher end, which seems to be what the error is doing, allows the segmenter (set at its default of edge extraction with n = 1 and no post-processing) to detect the veins and the optic nerve ring within the eye with a higher level of success. Why is this the case? It is because the image's background becomes more uniform as contrast is reduced at the white end, while the veins, being darker/greyer, are barely altered. This is also why values of γ below 1 perform better here.

Summary

From my tests, I have concluded that the best of the three techniques is the Power Law. It was the only technique that improved our system's accuracy. My tests also suggest that high levels of accuracy depend on successfully extracting data about the veins, which, as discussed above, the Power Law is highly effective at. This theory makes even more sense when you consider that the other two methods, which significantly increased the dynamic range, did very poorly in comparison. Our system will benefit from using the Power Law, so from this point on it will be enabled.

Noise Reduction

Our system incorporates two kinds of noise reduction: a Low Pass Filter and a Median Filter. From examining our images, one would conclude that salt-and-pepper and CCD noise are not present. To confirm this, we need to see whether the system gains accuracy when each technique is enabled.

Low Pass Filter (LPF)

As we can see in the table above, accuracy has significantly decreased. To illustrate this, here is what the original and processed histograms look like when the contrast enhancement is applied without the low pass filter. From the histograms, it would appear that the low pass filter is actually removing some of the contrast enhancement. Low contrast seems to be mistaken for actual background noise, and when that happens, more distinct light and dark patches are created, which in turn increases the dynamic range.

Median Filter

Similar to the low pass filter, the median filter also removes some of the improvements made by contrast enhancement, although it appears to do this to a lesser degree, as the accuracy is slightly higher here.

Summary

From our tests we can conclude that both the low pass filter and the median filter only damage the accuracy of our system, the LPF more so than the median filter. The two appear to undo some of the work done in contrast enhancement. In addition, there isn't actually enough noise in the images used here to warrant a noise reduction filter at all.
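For reference, the kind of 3x3 median filter we evaluated could look like the sketch below. This is only an assumed minimal version with illustrative names, not our actual implementation; border pixels are simply copied rather than padded.

    import java.awt.image.BufferedImage;
    import java.util.Arrays;

    // Hedged sketch of a 3x3 median filter on an 8-bit grayscale image.
    public class MedianFilterSketch {

        public static BufferedImage apply(BufferedImage src) {
            int w = src.getWidth(), h = src.getHeight();
            BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
            int[] window = new int[9];
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                    if (x == 0 || y == 0 || x == w - 1 || y == h - 1) {
                        // Copy the border unchanged rather than padding.
                        out.getRaster().setSample(x, y, 0, src.getRaster().getSample(x, y, 0));
                        continue;
                    }
                    int k = 0;
                    for (int dy = -1; dy <= 1; dy++) {
                        for (int dx = -1; dx <= 1; dx++) {
                            window[k++] = src.getRaster().getSample(x + dx, y + dy, 0);
                        }
                    }
                    Arrays.sort(window);
                    out.getRaster().setSample(x, y, 0, window[4]); // median of the 9 samples
                }
            }
            return out;
        }
    }

Each output pixel is the median of its 3x3 neighbourhood, which suppresses isolated salt-and-pepper pixels but, as noted above, also blunts some of the contrast enhancement.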
After performing these tests, I decided to test my hypothesis by applying the noise reduction filters before contrast enhancement and examining the results. The results were identical to those from the earlier test. What could that mean? It would seem that noise reduction is actually removing some information from the images, which then limits the effectiveness of the segmenter. From this point on, noise reduction filters will not be used.

Segmentation

Segmentation is used to separate the image into a foreground and a background, with key areas in the foreground turned white and the rest black. Our segmentation process uses edge extraction followed by automatic thresholding. The first thing we do is apply the Sobel mask to the pre-processed image. Edge extraction is very important because it shows the boundaries of the eye and makes the veins much more defined. Right after that we apply automatic thresholding to the gradient-magnitude image to get a binary segmented image.

The class we use to choose the threshold parameter is called SegmenterTest, which tests values of n in the range -2.0 to 2.0 in increments of 0.1 to see whether any value improves the accuracy compared to the default of n = 1. From this we got the following values: the default system with n = 1 produces a good accuracy of 88%, so this is the value we pass into our segmenter. This allows more generic segmentation than is possible by setting a manual threshold: the threshold used is derived from the mean brightness of the pixels in the image raster and then adjusted by n standard deviations, providing the best available threshold for each image.

To check whether the Sobel mask is the best choice for edge extraction, we compared the results against Prewitt mask edge extraction. We found that using the Prewitt mask as part of our segmentation process is less effective than using the Sobel mask with the default value n = 1. The best accuracy we got with the Prewitt mask also occurs at n = 1, just as with the Sobel mask. This allows us to deduce that the Sobel mask is the best option for edge extraction during the segmentation process.

Post-processing

Through this image processing technique, the segmented image is enhanced and filtered by a mask. The process uses erosion and dilation to remove isolated noise pixels, fill holes and smooth boundaries. With brightness-based segmentation, post-processing is used to clean up the thresholded binary image; however, it can make objects appear smaller or larger than their original size. We added the post-processing techniques of closing and opening as our ways of combining erosion and dilation. To decide which settings to use, we tried a variety of combinations and got the following results.

What we gathered is that accuracy drops heavily when any of the post-processing techniques are used. The image above has closing only enabled, which produced the best accuracy of the post-processing techniques; however, as you can tell from the image below, which has post-processing disabled, the latter has much more detail. It is for this reason that we will leave post-processing disabled, because we then obtain better accuracy from the images. Post-processing did not have a positive effect on classification accuracy, although it does make it visually easier to see how the application was processing the images.
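To make the post-processing step concrete, here is a hedged sketch of binary erosion and dilation and the opening and closing operations built from them. Our system does have a PostProcessor class (its erode method is reused in the perimeter calculation below), but the code shown here is only an assumed minimal version with a 3x3 structuring element, not the actual implementation.

    import java.awt.image.BufferedImage;

    // Hedged sketch of binary morphology; treats sample value 1 as foreground.
    public class PostProcessorSketch {

        public static BufferedImage erode(BufferedImage binary)  { return morph(binary, true); }
        public static BufferedImage dilate(BufferedImage binary) { return morph(binary, false); }

        // Closing (dilate then erode) fills small holes; opening (erode then dilate)
        // removes isolated noise pixels.
        public static BufferedImage close(BufferedImage binary) { return erode(dilate(binary)); }
        public static BufferedImage open(BufferedImage binary)  { return dilate(erode(binary)); }

        private static BufferedImage morph(BufferedImage src, boolean erode) {
            int w = src.getWidth(), h = src.getHeight();
            BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_BINARY);
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                    // Erosion keeps a pixel only if all neighbours are foreground;
                    // dilation sets a pixel if any neighbour is foreground.
                    boolean all = true, any = false;
                    for (int dy = -1; dy <= 1; dy++) {
                        for (int dx = -1; dx <= 1; dx++) {
                            int nx = Math.min(Math.max(x + dx, 0), w - 1);
                            int ny = Math.min(Math.max(y + dy, 0), h - 1);
                            boolean fg = src.getRaster().getSample(nx, ny, 0) != 0;
                            all &= fg;
                            any |= fg;
                        }
                    }
                    out.getRaster().setSample(x, y, 0, (erode ? all : any) ? 1 : 0);
                }
            }
            return out;
        }
    }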
Feature Extraction

The purpose of feature extraction is to gather useful features and details from the segmented images by extracting feature vectors using a technique called moments. Implementing moments correctly is the foundation for the essential calculations performed during the analysis of an object. In the feature extraction class within our program, we decided that the following features of an object would be taken into consideration: compactness, perimeter, position of the centroid and, finally, the area of the object. Before we could calculate these features, we first had to implement the moments formula in Java. Once the moment method was created in our class, we were able to use it to calculate the feature vectors needed.

Compactness

The reason we want the area and the perimeter is so that we can use those values to calculate compactness, which is a more useful shape descriptor for our vision system. Compactness is calculated by squaring the perimeter and then dividing by the area.

    private double compactness(BufferedImage image) {
        return Math.pow(getPerimeter(image), 2) / getArea(image);
    }

Above is the method called to calculate the compactness of the object; as you can see, the calculation mentioned above is performed within this method.

Perimeter

The perimeter of the object is obtained by first eroding the object, then calculating the eroded object's area, and finally taking the difference between the original area and the eroded area:

    Perimeter = Original Area - Eroded Area

    private double getPerimeter(BufferedImage image) {
        return getArea(image) - getArea(PostProcessor.erode(image));
    }

The method above performs exactly this calculation, Original Area minus Eroded Area, leaving us with the perimeter of our object.

Centroid Position

We can get the X and Y coordinates of the object's centroid from the moments M01 and M10:

    private double[] position(BufferedImage image) {
        // calculate Centroid at M01
        double i = Math.round(moment(image, 0, 1) / moment(image, 0, 0));
        // calculate Centroid at M10
        double j = Math.round(moment(image, 1, 0) / moment(image, 0, 0));
        double[] Cij = {i, j};
        return Cij;
    }

Above is the method we developed to find the position of the centroid of our object. As you can see, it uses the moment method to perform the calculations needed.

Area

We must also find the area feature. To do this we calculate M00, which can be done using the moment method developed earlier.
    private double getArea(BufferedImage image) {
        return Math.round(moment(image, 0, 0));
    }

Above is the getArea method, which calls the moment method and Math.round to find the area of our object.

Classification

Within the system we have developed, we included a Nearest Neighbour classifier that is used to identify and recognise the training images we supplied. When we run this classifier we get a variation of results depending on the value we set K to; the results are included below for analysis.

Nearest Neighbour classifier:

K = 1: accuracy 62.50%
K = 3: accuracy 87.50%
K = 5: accuracy 56.25%

As you can see from these results, the Nearest Neighbour classifier provides the highest accuracy when K is set to 3, because at this value it best recognises the features of the training images. A disadvantage of this approach is that changing the value of K can alter the accuracy of the output: increasing K from 1 to 3 improves the accuracy greatly, but increasing it from 3 to 5 costs roughly 30 percentage points of accuracy.

Summary: For this group of images, the Nearest Neighbour classifier with K set to 3 is the best method for classifying the object, because it returns the highest accuracy compared with the other values of K, such as 1 or 5, whose accuracy rates can be seen above.
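For completeness, the moment method that the feature extraction code above relies on is not shown in this report. Below is a minimal sketch of how such a raw moment m_pq might be computed for a binary image; the method signature and the foreground convention (any non-zero sample counts as foreground) are assumptions for illustration rather than our actual code.

    import java.awt.image.BufferedImage;

    // Hedged sketch of a raw image moment m_pq = sum over foreground pixels of x^p * y^q.
    public class MomentSketch {

        public static double moment(BufferedImage binary, int p, int q) {
            double m = 0;
            for (int y = 0; y < binary.getHeight(); y++) {
                for (int x = 0; x < binary.getWidth(); x++) {
                    // Treat any non-zero sample as foreground.
                    if (binary.getRaster().getSample(x, y, 0) != 0) {
                        m += Math.pow(x, p) * Math.pow(y, q);
                    }
                }
            }
            return m;
        }
    }

With this definition, moment(image, 0, 0) is simply the count of foreground pixels, which is why getArea above returns it directly.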

Friday, October 25, 2019

The main paper

In response to the tragic events of September 11, 2001, there has been renewed interest in the creation of a national identification system, typically embodied in a "smart" ID card, as a component of a national counter-terrorism strategy. National ID cards have been advocated as a means to enhance national security, unmask potential terrorists, and guard against illegal immigrants. "The core issue can be expressed as a question: What actions should the federal government take to certify the identity of its citizens and other persons in its jurisdiction, and what role should computing technology play in that process? To deal with questions like this, lawmakers, leaders, and government agencies often begin the policymaking process by seeking the advice of expert panels" (CSC4735 Forum). Overall, many people agree with the national ID card, and many others disagree with it. In my opinion, a national ID card system would not protect us from terrorism; instead, it would create a system of internal passports that would significantly diminish the freedom and privacy of law-abiding citizens.

Larry Ellison, head of Oracle Corporation, the California-based software company, has called for the development of a national identification system and offered to donate the technology to make it possible. He proposed ID cards with embedded digitized thumbprints and photographs of all legal residents of the United States (Black). In recent proposals, ID cards have been linked to national registration systems, which in turn form the basis of government administration. In such systems the ID card becomes merely one visible component of a much larger system, with the advent of magnetic stripes and microprocess... ...debates over health care reform, the Clinton Administration also constantly stressed that it was opposed to a national identifier. In 1999 Congress repealed a controversial provision of the Illegal Immigration Reform and Immigrant Responsibility Act of 1996, which had authorized the inclusion of Social Security Numbers on drivers' licenses.

In conclusion, for the reasons discussed in this paper, I think a national ID card system would not protect us from terrorism; it would instead create a system of internal passports that would significantly diminish the freedom and privacy of law-abiding citizens. So now imagine this: a police officer stops you in your car, scans your license, matches your fingerprint against a central database, and has immediate access to a plethora of information, including whether you are on a terrorist watch list. What would you think?

Thursday, October 24, 2019

Black Genocide Essay

"Black Genocide in the 21st Century", also called "Maafa", is an anti-abortion documentary made in 2009 that speaks about birth control, White America and Black America, Planned Parenthood and how it was established, and the conspiracy behind abortion. The movie also discusses at length the argument that abortion is genocide and that it specifically targets African Americans. Black Genocide was a very intriguing and interesting piece of material that taught me much more than I had expected. Before watching this video, I knew a little about abortion but nothing about the black genocide argument. I knew that abortion was a way for the government to obtain legal rights to abort children who could not be cared for, but I did not know that the government was using abortion as a way to limit the black population. I also knew that African Americans were having a lot of abortions and that there were, and still are, a lot of abortion facilities, but I had never put together how they were getting access to this information, or the connection between eugenics and genocide.

During the film, I learned so much information that disgusted me and changed many of my views toward abortion and other things. I learned that in the early 1800s, Americans feared retribution and insurrection because slavery was supposed to have ended. Intermarriage was also seen as leading to a loss of racial purity, and for that they had a plan of colonization. Colonization was an effort to send African Americans back to Africa. After colonization, a new philosophy was established, called "eugenics", the supposed perfect solution to what was known as the "negro dilemma." I also learned that eugenicists believed that Africans were inferior and could not make it without guidance. Margaret Sanger was the founder of the "American Birth Control League" and was successful in promoting abortion and birth control.

After watching Black Genocide in the 21st Century, I wanted to know more about the situation with the NAACP and why the government still has not publicly addressed the conflict between the protesters and their undercover targets. I would also like to know more about positive and negative eugenics, and why White America was considered positive eugenics when it was used to try to dominate the black parts of America and as a companion effort to exterminate African Americans. I would also like to know more about Planned Parenthood and whether minority areas are still being targeted with its facilities. Finally, I would like to know more about White America and the Planned Parenthood meetings, and whether Planned Parenthood groups also targeted low-income neighborhoods of other races, such as Caucasians.

Wednesday, October 23, 2019

Compare

Today the United States of America has a large and carefully constructed government that has been influenced through the ages. The Greek, Roman and Judeo-Christian traditions had the biggest impact on our government today because of the way they established their own governments. Greco-Roman and Judeo-Christian cultures had similar ideas about laws and individual duties that have influenced us. The Greeks, just like the United States, used three branches of government: laws were carried out through the executive branch and passed by the legislative branch. Their leader was chosen by lot; today the president is chosen by popular vote.

Judeo-Christian, Greek and Roman cultures also had differences in their views of law, reason and faith, and of individual responsibilities. In Judeo-Christian thought, law, reason and faith are based on the Word of God; they believe in only one God, who is the creator of all things. Greco-Roman beliefs dealt more with logic. Philosophers like Plato and Aristotle believed in a supreme God, but this stood against a popular mythology in which the people preferred to create their own gods. As far as law and reason go, in Greece philosophy ruled, and in Rome the opinion of Caesar ruled. Greeks viewed law as something developed by common sense over time, through civilized logic and experience. Jews and Christians viewed laws as coming from God. The duty of the individual under the Judeo-Christian view is to love the Lord your God with all your heart and all your mind and all your soul, and to love your neighbor as yourself. The Greco-Roman view was that only Roman citizens were to be considered and treated as people; the Greeks considered those outside of Athens to be ignorant and not worth their time.

These three cultures have influenced the way we think about laws even today. We use the Judeo-Christian ideas about individual worth, ethical conduct, and the need to fight injustice. These ideals continue to be extremely important to the United States government. They taught us that representation and citizen participation are important features of democratic governments around the world. The Romans were the first to give the world the idea of a republic; they had the first written legal code and the idea that this code should be applied equally and impartially to all citizens. The Greeks, on the other hand, invented the first democracy in the ancient world. All in all, these three cultures, Greek, Roman and Judeo-Christian, had one thing in common: they all influenced our government today. Even though they are quite different and have different ideas based on law, faith and tradition, they are also a lot alike. They were all influential in positive ways, and we owe it to them for creating the government we have today in the United States.