## Methods Requirements

**Methods Section (1-2 Pages)**

- How did you collect the data?
- Describe the methods used (include surveys, interview questions, observation sheets, questionnaires, etc. in the appendix).
- Address why these methods were appropriate for your question
- Explain the timeline of your data collection
- Explain how you analyzed the data, but __this section should not have any results in it__!
- If there were any errors or challenges you ran into, you can discuss them here as well, and how they were resolved

- Passive Third Person Voice
- Times New Roman, Size 12 font

## Sample Methods Sections

## Example #1

*(From a paper with the Essential Question: How are the attorneys utilizing their time and resources to benefit clients, while benefiting the firm?)*

In order to better understand how such conflicts impact a law firm’s financial well-being, several variables were analyzed from my firm’s closed 2016 cases. I chose 33 cases to incorporate a variety of case types, where case type refers to the cause of action for the case. These were categorized as: disability discrimination, sexual harassment, gender discrimination, FEHA violations, pregnancy discrimination, age discrimination, retaliation, litigation, and other various violations. After choosing the cases and dividing them, I analyzed the following variables: litigation, which is the process of taking legal action or suing; date of retainment, which is the date on which the client and attorney begin their relationship; date of settlement, which is when the case closes; date of no settlement; and how the person heard about the firm. It is important to note that I needed to look specifically at closed cases, or ones that had already settled, so I could calculate how long each case took from the date retained to the date settled. I found all of this information by searching through the firm’s digital and physical files of the closed cases. After analyzing these variables, I calculated the average length of time to reach a settlement and the average settlement payment amount by cause of action.

The timeline of this research spanned approximately half of the time spent at my internship. The analysis took place over the course of seven weeks. For the first three weeks, I worked with my mentor to choose the cases. Although I was allowed nearly complete freedom to select the cases, I did consult with my mentor for recommendations to make sure all categories were equally represented. In the fourth week, all of this data was organized into a master Google spreadsheet and separated into individual spreadsheets based on case type. During the last three weeks of data collection, I analyzed the data and calculated settlement times to try to find a correlation.

The purpose of this essential question was to allow me to conduct research on the firm’s financial well-being and to figure out how the attorneys can benefit clients and themselves. Analyzing past cases seemed to be the best method, since they were actual cases from our firm and it was easy to correlate settlement times with types of cases. This information had never been analyzed in this way, and the firm was very appreciative, although these 33 cases were merely a sample of the hundreds of cases the firm handles in a year. Categorizing the 33 cases into subcategories was an organizational strategy that allowed me to easily calculate the average amount of time and the average gross settlement. I calculated the averages using the Google Sheets AVERAGE function. I then took this data and created two graphs: one shows the average number of days each case type took (Appendix, Figure 1), and the other shows the average amount of money each case type earns (Appendix, Figure 2). This allowed me to easily interpret the averages and determine which case types, or causes of action, benefit the firm the most.
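The grouping-and-averaging step described here was done in Google Sheets; as a minimal sketch of the same idea, the calculation could also be done in a few lines of Python. The case rows below are invented for illustration, not data from the paper.

```python
# Sketch of grouping closed cases by cause of action and averaging
# days-to-settlement and gross settlement. All rows are made up.
from collections import defaultdict

# (cause_of_action, days_to_settle, gross_settlement) -- hypothetical rows
cases = [
    ("disability discrimination", 210, 45000),
    ("disability discrimination", 180, 30000),
    ("sexual harassment", 365, 90000),
    ("sexual harassment", 300, 75000),
    ("retaliation", 150, 20000),
]

# Collect the rows for each cause of action
grouped = defaultdict(list)
for cause, days, amount in cases:
    grouped[cause].append((days, amount))

# Average each variable within each group
for cause, rows in grouped.items():
    avg_days = sum(d for d, _ in rows) / len(rows)
    avg_amount = sum(a for _, a in rows) / len(rows)
    print(f"{cause}: {avg_days:.0f} days, ${avg_amount:,.0f} average settlement")
```

The per-group averages are exactly what the two appendix graphs plot: days to settlement by case type, and gross settlement by case type.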

Unfortunately, while calculating these averages I ran into an error. I realized that the number of cases in each subcategory varied, which made the simple averages potentially misleading. However, this was easily fixed by calculating the standard deviation and standard error. Standard deviation is a quantity calculated to indicate the extent of variation within a group as a whole, and standard error is a statistical measure that takes sample size into consideration. This step was important because not all categories had the same number of cases in them, so the data could be skewed; the standard error accounts for the differing sample sizes and gives a more accurate picture of how reliable each category’s average is.
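The standard deviation and standard error described here can be sketched with Python's standard library. The days-to-settle values below are hypothetical, chosen only to show the arithmetic.

```python
# Sketch of the standard deviation / standard error calculation.
# Values are invented for illustration.
import math
import statistics

# Hypothetical days-to-settle for one case-type subcategory
settlement_days = [210, 180, 195, 240]

mean = statistics.mean(settlement_days)
sd = statistics.stdev(settlement_days)      # sample standard deviation
se = sd / math.sqrt(len(settlement_days))   # standard error of the mean

print(f"mean={mean:.2f}, sd={sd:.2f}, se={se:.2f}")
```

Because the standard error divides by the square root of the sample size, a subcategory with only a few cases reports a wider uncertainty around its average than one with many cases, which is exactly why it helps when the subcategories are unevenly sized.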

## Example #2

*(From a paper with the Essential Question: What are the differences in accuracy and precision between the traditional survey method and the laser scanner survey method?)*

My question revolved around comparing the precision and accuracy of different survey methods. In order to collect the data needed to compare the precision and accuracy of the two methods, I had to perform 2 surveys of the same points: one with the Total Station and one with the Laser Scanner. I decided to use the back of the WHPacific building as my survey location. I placed paper targets on 8 different spots along the wall and numbered them. 8 targets were needed in order for the Laser Scanner to make an accurate image of its surrounding area.

As seen in the image in Appendix A, paper targets are simply pieces of paper with a checkerboard pattern. The laser scanner software recognizes these targets and determines their center points, providing the surveyor with the desired point out of the point cloud. Although these targets are designed for the Laser Scanner software, they are just as effective when using the Total Station, because the Total Station depends on the surveyor, not the software, to find the point. In addition to placing the paper targets, I had to set 2 control points in order to create an accurate survey. I set CP 100, which was the point on which I set the Total Station and the Laser Scanner’s white sphere. I also set a backsight, CP 101, in the parking lot approximately 191.82 ft away from CP 100. The farthest target was less than 66 ft away, so the backsight was placed more than far enough away to stay in tolerance.

I first surveyed my points using the Total Station on Thursday, October 13 (73°/62°, AccuWeather). As seen below to the left, the Total Station is a manually controlled survey instrument that uses a laser to determine distances and angles between points. I placed the Total Station on a tripod directly over CP 100 and used the small sight at the bottom of the Total Station to make sure I was accurately over the marked control point. I then shot all of the paper targets and recorded my shots in the data collector. When I finished my survey, I checked back in with my backsight to make sure I was in tolerance. I conducted 3 surveys, making sure to move the tripod off the control point each time and set it up all over again to get a more realistic measure of precision and accuracy. In the office, I downloaded the points from the SD card in the Total Station and imported them into AutoCAD.

I then surveyed using the Laser Scanner on Tuesday, October 18 (74°/63°, AccuWeather). As seen to the left, the laser scanner is a survey instrument that takes a 360-degree shot of its surroundings in order to create a “point cloud”. There are many different kinds of laser scanners, but WHPacific uses a FARO Laser Scanner Focus 3D. This is a stationary laser that is set up on a tripod, much like a Total Station. However, once the Laser Scanner is set up, the survey process is entirely different. First, the surveyor chooses the resolution, or density of points. The FARO is capable of taking up to 976,000 points per second (FARO). I used a resolution of 26 million points, with a survey time of 8 minutes. Once the resolution is set, the surveyor presses the “Start Survey” button and waits for the laser scanner to generate a point cloud. However, the laser scanner software (Recap) needs at least 2 surveys of the same area from different angles to be able to stitch together one cohesive image. Consequently, I needed 16 minutes for one complete survey, with the Laser Scanner at 2 different angles. After collecting data in the field, I used Recap to stitch both surveys together into one image. I did this by selecting both surveys and then clicking on the paper targets and sphere, which the software can recognize. The software then creates one cohesive image and renders a 3D model.

After surveying and collecting all of the points, I put the raw points in a Google Sheets document in the PNEZD format (Point, Northing, Easting, Elevation, Description) and imported the CSVs into AutoCAD.
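As a minimal sketch of the PNEZD layout mentioned above, the snippet below writes a few survey points as comma-delimited rows in Point, Northing, Easting, Elevation, Description order. The point numbers and coordinates are invented for illustration.

```python
# Sketch of writing survey points in the PNEZD format
# (Point, Northing, Easting, Elevation, Description) as CSV rows.
# All coordinates below are hypothetical.
import csv
import io

points = [
    (1, 5000.12, 2000.34, 101.50, "TARGET1"),
    (2, 5003.78, 2001.02, 101.48, "TARGET2"),
    (100, 4990.00, 1995.00, 100.00, "CP100"),
]

buf = io.StringIO()
writer = csv.writer(buf)
for p, n, e, z, d in points:
    # One row per point: number, northing, easting, elevation, description
    writer.writerow([p, f"{n:.2f}", f"{e:.2f}", f"{z:.2f}", d])

print(buf.getvalue())
```

Each line of the resulting file holds one shot, which is the row shape a PNEZD import expects.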