
Here's a great how-to article for anyone interested in doing clinical research


Ace844


Here's a great how-to article which, if you are interested in doing research, may help you get started...

Hope This Helps,

ACE844


doi:10.1016/j.injury.2005.06.051

Copyright © 2005 Elsevier Ltd. All rights reserved.

REVIEW

Designing, conducting and reporting clinical research.

A step by step approach

Beate P. Hanson (a)

(a) AO Clinical Investigation and Documentation (AOCID), Clavadelerstrasse, CH-7270 Davos Platz, Switzerland

Accepted 21 June 2005. Available online 25 August 2005.

Summary

There are five major steps that one must navigate successfully to take a study idea and turn it into a publication that may have an impact on clinical practice. These steps include developing the study question(s), developing the study plan, implementing the study plan, reporting the results and submitting the manuscript(s) for publication.

This review takes each of these steps and expands on its important components. More detail is given for steps one, two and five.

Furthermore, the review is augmented with tables and checklists that may serve as tools in the planning and execution of a clinical study. Though it does not address every detail for each of the steps discussed, readers of all experience levels should find it a useful tool in the planning, execution and reporting of their next clinical study.

Keywords: Clinical research; Specific aims; Study protocol; Study methods; Publication

Article Outline

Introduction

Step 1: developing the study question

Defining the study question

Refining the study question

Converting your study question(s) into a specific aim(s)

Step 2: developing the study plan

Study design

Randomized controlled trials

Cohort studies

Case–control studies

Case-series

Blinding

Prognostic variables

Outcome measures

Complete follow-up

Statistical analysis

Step 3: implementing the study plan

Step 4: reporting the results

Step 5: submitting for publication

Conclusions

References

Introduction

Being careful is what designing and conducting research is all about; taking care in articulating the study question, in choosing the correct study design and in ensuring that data are carefully extracted, recorded, managed and analysed. In short, those who conduct research must be careful when they imply that what is found in their study is the truth in the surgical universe.

The purpose of this review is to help both clinicians and researchers to develop an overall plan for their future clinical research, by discussing the following five important steps:

(1) developing the study question(s);

(2) developing the study plan;

(3) implementing the study plan;

(4) reporting the results;

(5) submitting manuscript(s) for publication.

The focus of the review will be on steps one, two and five. Details of steps three and four will be reserved for another publication.

Step 1: developing the study question

Developing a study question that is destined for success is based on three important phases: (a) defining the study question(s); (b) refining the study question(s); (c) converting the study question(s) into a specific aim(s).

Defining the study question

The first phase in defining a study question is to take an initial idea and narrow it down to an answerable, or testable, question. In determining whether your initial research question is answerable, you must ask yourself, “Is my research idea do-able the way that I’m currently proposing it?” In order to make progress, this question must be answered honestly before getting started. This is best facilitated by bouncing ideas off mentors and colleagues, through brainstorming and group discussions, before settling on a preliminary question that you can refine further.

Refining the study question

Once a study idea has been defined, it needs to be refined into an answerable study question. There are four main factors to consider that will help you refine your study question further. A simple way to help you to frame a clinical question is to use the acronym PICO (patients, intervention, comparison and outcome). It is a good idea to put your idea on paper using the PICO method (Table 1).

Table 1.

Example of the PICO method for an orthopaedic trauma study question

Patients: What patient group? Young active adults, ages 18–40 years, with tibial pilon fractures (AO 43-A1, 43-B2.2, 43-C3)

Intervention or procedure: What surgical procedure or implant? Plate osteosynthesis

Comparison: What treatment is being compared? External fixation

Outcomes: In what outcomes are you interested? Length of hospital stay; time to weight bearing; complications; functional and quality-of-life outcomes

Once you have put your idea on paper, you can then consider the novelty, feasibility and ethics of the study question, early in the stages of development, to ensure that the project will succeed. Some of the important steps in accomplishing this include conducting a detailed literature search, creating a team of collaborators, writing a draft of your specific aims, doing some initial sample size calculations, drafting a timeline, applying for funding and determining when you will submit an application to the IRB (Table 2).

Table 2.

Important issues to consider when refining your study question

Consideration: Is this study novel?
Ways to address it: Conduct a literature search of your topic in PubMed and other literature resources, such as the AO's OTD and Evidence Summaries in the Knowledge Portal. Look at conference proceedings to see what has been done but not yet published.
Ways to improve it: Modify the study question to include some novel aspect. Do something better than it has been done before (e.g. study design, more patients, better control, longer and more complete follow-up).

Consideration: Does my team of collaborators have adequate expertise to carry out this research?
Ways to address it: Schedule meetings with study collaborators during the early stages of refining your study question to identify potential gaps in knowledge or skills.
Ways to improve it: Seek outside consulting.

Consideration: What are your objectives? Is the scope of this study manageable?
Ways to address it: Write down and discuss your specific aims with study collaborators.
Ways to improve it: Modify the aims, retain those that are most significant.

Consideration: Can I enrol enough willing, consenting participants meeting my inclusion criteria to make meaningful inferences?
Ways to address it: Determine the likely number of patients you will see in a given time period at your hospital or facility. Critically review results of previous studies for participation rates and prevalence of outcomes and treatments of interest. Do a power analysis to determine the necessary number of subjects.
Ways to improve it: Broaden inclusion criteria, length of enrolment and number of investigative sites to increase sample size.

Consideration: Do I have the time to see this study from start to finish?
Ways to address it: Draft a timeline.
Ways to improve it: Consider an alternative study design.

Consideration: Do I have enough financial support to address this study question adequately?
Ways to address it: Draft a budget plan.
Ways to improve it: Apply for funding.

Consideration: Is this study ethical?
Ways to address it: Discuss with your collaborators and Human Subjects Division. Submit an application to the human subjects review board at your institution.
Ways to improve it: Modify the study question or study design.

Converting your study question(s) into a specific aim(s)

The most valuable product of the above-mentioned steps, and the one that will ultimately drive your study plan, is the set of specific aims for your study. The aims of a study should be specific and hypothesis-driven. It is common for a study to have between two and four specific aims that are components of an overarching research question. It is common to have one or two primary questions that you would like to answer and another one or two secondary questions you would be interested in exploring. Those questions that are of primary interest are commonly considered primary aims. The rest of your protocol is centred on your primary aims, including your sample size calculations and data analysis. You will need adequate power (e.g. a large enough sample size) to answer the primary aims. On the other hand, you do not necessarily need to power your study to answer your secondary aims. Therefore, questions that are of secondary interest, or that may require a sample size you cannot obtain, should be secondary aims. These specific aims will provide the cornerstone for developing your study plan. Table 3 is an example of how you might craft a set of specific aims.

Table 3.

Example of specific aims for a randomized controlled trial comparing the Locked Compression Plate (LCP) to standard plates in the treatment of tibial pilon fractures (a)

Primary aims

1. To compare the incidence of deep infections

2. To compare functional outcomes

Secondary aims

1. To compare the incidence of ankle osteoarthritis

2. To compare bony union by measuring the incidence of delayed union and the incidence of malunion

(a) Typically, specific aims have more detail than what is presented in the table, to include how one would measure the outcome and the time frame of interest.

Step 2: developing the study plan

Once the specific aims have been established, you can begin to develop your study plan. The study plan is best developed in the following two stages: (a) the study outline and (b) the study protocol. The study outline provides a framework for the basic elements of the proposed study and should be one to two pages in length. Furthermore, the study outline can serve as a short proposal for your idea that you might share with colleagues, potential co-investigators, funding sources, etc., before developing the study protocol. When applying for funding through a governmental organization, the outline is typically called a letter of intent (LOI). The basic components for this outline are listed in Table 4.

Table 4.

Checklist for study outline

Specific aim(s) √

Background and significance √

Expected outcomes √

Time frame √

Methods and brief research plan √

Study design

Subject inclusion and exclusion criteria

Demographic, predictor and outcome variables

Statistical issues (hypotheses, sample size and basic analysis plan)

Participants √

Resources and budget √

Research site √

The study protocol is an extended version of the outline and should contain as much detail as possible (Table 5). This protocol provides the main framework for the study justification and operations. It will be submitted to the Institutional Review Board (IRB) and any sponsor, or future funding organizations, if applicable. A well-thought-out protocol is a recipe for success. Time spent ‘working out the kinks’ and creating a detailed written plan will make for an efficient and successful research study. The following sections discuss important aspects of your study plan that will ultimately end up in your study protocol.

Table 5.

Outline for a study protocol

1. Specific aim(s): What aims will the study address?

2. Background and significance: What is known about the subject and why are these aims important?

3. Methods

Study design: What study design will best answer the question given the various limitations?

Subjects, selection criteria and sampling: Who are the subjects that are to be included? How will the subjects be selected and recruited?

Intervention: What is the treatment or intervention?

Patient enrolment and data collection: How will patients be recruited and enrolled? How will data be collected?

Measurements (predictor variables, potential confounding variables, outcome variables): What measurements will be made? What instruments or techniques will be used to measure them? Are these valid, reliable and responsive?

Quality control and data management: How will the data be input and managed?

Compliance and follow-up: How will compliance and follow-up be ensured?

4. Statistical issues

Sample size: How large will the study need to be?

Analysis: How will the data be analysed (descriptive and analytical statistics)?

5. Timetables and organization: What is the timeframe for starting and finishing the clinical trial?

6. Ethical considerations

Safety, privacy and confidentiality: How will safety, privacy and confidentiality be handled?

Informed consent/institutional review

Study design

The age of evidence-based medicine has arrived. Now more than ever, you need to think about the study design before committing time to research that you will ultimately want to publish. Choosing an appropriate study design is critical to your ability to address the specific aims of your study (Fig. 1).


Figure 1. Study designs.

There are two main categories of comparative study designs: experimental (i.e. randomized controlled trial) and observational (i.e. cohort and case–control). Descriptive designs, such as case-series, are also informative in certain situations, but have significant limitations when attempting to determine treatment superiority. In the hierarchy of study designs, the RCT provides the strongest evidence for safety and effectiveness (Fig. 2).


Figure 2. Hierarchy of evidence provided by different study designs.

Randomized controlled trials

Randomized controlled trials are characterized by:

• Random assignment of intervention, or treatment, in which a group of patients are randomly assigned either to an experimental group to receive a treatment such as surgery, or to a control group (the control group might receive nothing, placebo or an active alternative).

• Minimizing confounding variables (known and unknown). A confounding variable is both associated with the exposure of interest (e.g. treatment) and is a risk factor (or prognostic factor) for the outcome.

• Offering the most solid basis for an inference of cause and effect, compared with the results obtained from any other study design.

When employing randomization, it is important to keep treatment group assignments unpredictable over the course of a study. In other words, a participant's treatment allocation should not be revealed until he/she has been officially enrolled. This is known as concealment. Furthermore, the randomization should occur as late in the study as possible. This helps to prevent the bias that can arise when either caregivers, or patients, delay enrolment until they think that the chances are better of their receiving a desired intervention.4 This ensures that factors influencing eligibility, or consent to participate, are not disproportionately divided into treatment groups. The most popular methods for allocation concealment include:

• Having a central study office that performs the randomization and is telephoned upon participant enrolment.

• Using sequentially numbered, sealed, opaque envelopes that contain treatment group assignments.

RCTs that use non-concealed randomization are known as Quasi-RCTs. These are studies in which the allocation of participants to different forms of care is not truly random, for example, allocation by date of birth, day of the week, medical record number, month of the year or the order in which participants are included in the study (e.g. alternation). This type of allocation is more prone to selection bias.
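To make allocation concealment concrete, here is a minimal sketch, in Python, of how a study statistician might pre-generate a permuted-block allocation list that is then held by a central office or sealed into numbered opaque envelopes. The block size, arm labels and seed are invented for illustration; a real trial would use validated randomization software or a dedicated randomization service.

```python
import random

def block_randomization(n_blocks, block_size=4,
                        arms=("plate osteosynthesis", "external fixation"),
                        seed=2005):
    """Pre-generate a permuted-block allocation list.

    Each block contains an equal number of assignments to each arm, shuffled
    within the block, so group sizes stay balanced while the next assignment
    remains unpredictable to the enrolling clinician.
    """
    rng = random.Random(seed)             # fixed seed so the list is reproducible
    per_arm = block_size // len(arms)
    allocations = []
    for _ in range(n_blocks):
        block = [arm for arm in arms for _ in range(per_arm)]
        rng.shuffle(block)                # permute assignments within the block
        allocations.extend(block)
    return allocations

# The resulting list is revealed one envelope at a time, only after a
# participant has been formally enrolled (concealment).
if __name__ == "__main__":
    for number, arm in enumerate(block_randomization(n_blocks=3), start=1):
        print(f"Envelope {number:02d}: {arm}")
```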

Sometimes patients in a clinical trial are assigned to one treatment group, but for a variety of reasons, receive the other treatment. When this occurs, subjects should be analysed as if they had completed the study in their treatment groups, which were formed by randomization. This is called intent-to-treat. Any alteration in the composition of each treatment group in the analysis negates the intention of the randomized trial design—to have a random distribution of unmeasured characteristics that may affect outcome (i.e. confounders). When this happens, the randomized trial, in effect, is converted to an observational trial.4 For example, in a trial comparing reamed to unreamed tibial nailing, it is possible that a patient randomly assigned to receive an unreamed nail ends up requiring reaming. This patient should be analysed as if he/she did not receive the reaming.

Intent-to-treat analyses are the best way to ensure that confounding will not play a role here. The price paid, however, is typically an attenuation of any observed associations between treatment and outcome—any treatment effect found in an intent-to-treat analysis is likely to be a conservative estimate of efficacy.4
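As a toy illustration of the intent-to-treat principle, the sketch below uses invented patient records and groups them for analysis by the arm to which they were randomized, regardless of the treatment actually received; an "as-treated" grouping is shown only for contrast.

```python
# Hypothetical records: each patient has a randomized arm, the treatment
# actually received, and an outcome (1 = deep infection, 0 = none).
patients = [
    {"id": 1, "assigned": "unreamed", "received": "unreamed", "infection": 0},
    {"id": 2, "assigned": "unreamed", "received": "reamed",   "infection": 1},  # crossover
    {"id": 3, "assigned": "reamed",   "received": "reamed",   "infection": 0},
    {"id": 4, "assigned": "reamed",   "received": "reamed",   "infection": 1},
]

def infection_rate(records, group_key, arm):
    """Proportion of patients with the outcome, grouped by the chosen key."""
    group = [p for p in records if p[group_key] == arm]
    return sum(p["infection"] for p in group) / len(group)

# Intent-to-treat: analyse by the randomized assignment, which preserves the
# balance of unmeasured confounders created by randomization.
for arm in ("reamed", "unreamed"):
    print(f"ITT        {arm:9s}: {infection_rate(patients, 'assigned', arm):.2f}")

# As-treated (for contrast only): grouping by treatment received re-introduces
# the selection effects that randomization was meant to remove.
for arm in ("reamed", "unreamed"):
    print(f"As-treated {arm:9s}: {infection_rate(patients, 'received', arm):.2f}")
```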

Three other study designs, lower on the evidence pyramid, are the cohort study design, the case–control design and the case-series.

Cohort studies

Cohort studies are characterized by:

• Comparing the outcomes of people whose treatment differs “naturally” (i.e. not as the result of random assignment).

• Identifying study participants based on treatment, and then comparing their outcomes.

• The eligible participants’ not having experienced the outcome of interest at the time when treatment groups are defined.5

While the RCT is considered the “gold standard” of all study designs, the cohort study is often referred to as the “gold standard” of observational studies because of its ability to establish a temporal relationship between the treatment and the outcome of interest. In other words, the treatment clearly precedes the outcome. Since factors other than the treatment (such as prognostic factors) can also influence the outcome, an imbalance between treatment and control groups with respect to these factors may result in a biased outcome. Furthermore, these factors often influence which treatment the patient receives. As a result, cohort studies can lead to misleading results, either overestimating or underestimating the treatment effects, if these factors are not carefully identified and controlled for. Cohort studies may be divided into those that are prospective and those that are retrospective, based on the time of study initiation. Prospective cohort studies involve the ascertainment of treatment status at the outset with follow-up for outcome to occur in the future. Retrospective cohort studies, on the other hand, are characterized by the treatment and outcome having already occurred at the time of study initiation. Even though retrospective cohort studies tend to be cheaper and faster than prospective cohort studies, the retrospective nature of the study can introduce additional bias. Furthermore, retrospective cohort studies are limited to outcomes and prognostic factors that have already been collected, which may not be the factors that are important in answering the clinical question.

Case–control studies

Case–control studies are characterized by:

• Comparing the frequency of past “exposure” between cases that develop the outcome of interest and controls that do not have the outcome.

• Controls are chosen to reflect the frequency of “exposure” in the underlying population at risk from which the cases arose.

• Study participants are identified based on outcome and then compared for presence of “exposure”.

“Exposure” can refer to a treatment, or any other factor that may influence the outcome, such as fracture severity, degree of osteoporosis and age, to name a few.

The case–control design is an alternative to the cohort design for investigating, or comparing, the effects of a treatment(s) (or risk factors) on an outcome, generally when the outcome of interest is rare. Examples of rare outcomes in musculoskeletal traumatology include pulmonary embolism, implant failure, mortality and others that occur in less than 5% of all treated subjects. A case–control study compares the odds of a past treatment, or a suspected risk factor, between cases (individuals with the outcome of interest) and controls (individuals who are as similar to the cases as possible without the outcome of interest).

Case-series

Case-series are characterized by:

• Collection of multiple noteworthy clinical occurrences.

• Description of an unusual combination of signs and symptoms, experience with a novel treatment or a sequence of events that may suggest previously unsuspected causal relationships.

• Being descriptive studies, unlike the previously described analytical studies: they are undertaken without a particular hypothesis in mind and lack a comparison group.

• The need for caution in generalising the results to patients in other settings.

Despite being the weakest with respect to providing evidence for treatment superiority, case-series are frequently published in musculoskeletal traumatology.

Blinding

Blinding, or masking, refers to keeping persons involved in a trial (RCT, cohort or case–control study) unaware of which study subjects are in which treatment arm. The main reasons for doing this are:

• to avoid possible influences of this knowledge in assessing the outcome;

• to minimize differential attrition (loss to follow-up) between treatment groups.

When determining who should be blind, ask these three questions:

• Can I blind the patients? The best way to avoid the placebo effect is to prevent the patient from knowing if he/she received the treatment of interest.

• Can I blind the clinicians? Differences in patient care, other than the intervention (such as rehabilitation care), can bias the results.

• Can I blind those who evaluate the outcomes? If study personnel are privy to the treatment, outcomes assessed by these personnel, such as radiographs or clinical status, may reflect the assessor's bias (conscious or subconscious).

Generally, a trial is double-blind if both the patients and research staff members responsible for measuring outcomes are kept unaware. A trial is single-blind if only one of these parties (usually the subjects) is kept unaware. Blinding may also be extended to people with other roles, such as those performing the statistical analyses of the data. If blinding is not logistically, or ethically, possible, you should, at a minimum, enlist independent (i.e. disinterested) observers to evaluate important outcomes.

Prognostic variables

Prognostic variables are those that may be associated with the outcome, but are not necessarily the treatment interventions being evaluated. These should be discussed up front, especially if you have the desire to explore their association with the outcome. These are especially important for prognostic studies that seek to identify those patients at a greater risk of a poor prognosis. A thorough literature review is the best way to identify what these factors are. Clinical experience should also contribute to identifying factors that may not have been identified in the past. Furthermore, prognostic variables may also be potential confounding variables, which accentuates the importance of measuring them. A good example is fracture severity, or classification. The more severe fractures tend to be treated differently from the less severe fractures and often lead to worse outcomes, independent of the treatment intervention.
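As a hedged illustration of why such variables must be recorded, the following sketch uses invented counts in which plate osteosynthesis is chosen more often for complex fractures; the crude comparison then makes the plate look worse, even though it does better within each severity stratum.

```python
# Invented counts illustrating confounding by fracture severity: plate
# osteosynthesis is used more often for complex fractures, which also do
# worse regardless of treatment.
# Each tuple is (treatment, severity, poor_outcomes, patients).
strata = [
    ("plate",  "simple",   1, 10),
    ("ex-fix", "simple",   6, 40),
    ("plate",  "complex", 16, 40),
    ("ex-fix", "complex",  5, 10),
]

def rate(rows):
    """Proportion of patients with a poor outcome in the given rows."""
    events = sum(e for _, _, e, n in rows)
    total = sum(n for _, _, _, n in rows)
    return events / total

# Crude comparison (ignores severity): plate 34% vs ex-fix 22%, so the
# plate appears worse.
for treatment in ("plate", "ex-fix"):
    print(f"Crude, {treatment}: {rate([r for r in strata if r[0] == treatment]):.0%}")

# Stratified comparison: within each severity stratum the plate does better
# (10% vs 15% for simple, 40% vs 50% for complex). The crude result was
# driven by case mix, i.e. by confounding.
for severity in ("simple", "complex"):
    for treatment in ("plate", "ex-fix"):
        rows = [r for r in strata if r[0] == treatment and r[1] == severity]
        print(f"{severity}, {treatment}: {rate(rows):.0%}")
```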

Outcome measures

A perfectly designed study that clearly demonstrates the superiority of one treatment over another may provide insufficient evidence, or even be harmful, if it fails to measure “important” outcomes. Some of the best studies leave us with more questions, because the authors failed to put thought into their outcome selection. For example, while one treatment method may lead to fewer short-term complications, when compared to another, the same method may also result in decreased function, or an inferior quality-of-life. Were these outcomes measured? What is critical to any clinical, or research, setting, with respect to measuring treatment effectiveness, is identifying and measuring clinically “important” outcomes. That which is deemed “important” may lie in the eye of the beholder; however, much thought should go into their selection. The following should be considered when selecting outcomes:

• They should be directly tied to the specific aims and capable of measuring the outcomes of interest.

• They should be important to patients.

• Patient-reported outcomes should be considered.

Emerging patient-reported outcome (PRO) measures are doing a better job of measuring aspects of patients’ lives that they consider important. Furthermore, they are generally more carefully developed and tested. Generally, PROs are questionnaires, or instruments, that patients complete by themselves, or, when necessary, are completed by others on their behalf, to obtain information in relation to functional ability, symptoms, health status, health-related quality-of-life and results of specific treatment strategies. It is increasingly recognized that traditional clinician-based outcome measures need to be complemented by measures that focus on the patient's concerns, in order to evaluate interventions and identify whether one treatment is better than another.7 Interest in PROs has been fuelled by an increased importance of chronic conditions, where the objectives of treatment are to restore, or improve, function, while preventing future functional decline.1

A large array of such instruments is now available for musculoskeletal conditions. For a thorough discussion on the selection of appropriate outcomes and an evaluation of more than 150 musculoskeletal outcomes instruments cited in the literature, see the AO Handbook: Musculoskeletal Outcomes Measures and Instruments.8

Complete follow-up

It is important to develop a subject follow-up plan that minimizes losses to follow-up. It is not uncommon for the results of a study to be reported using only a proportion of the subjects who entered the study. A high rate of follow-up (e.g. >90%) will help to avoid the bias that can arise from an association between the factors determining dropout and the outcome. For example, 12 months after surgery for a tibial pilon fracture, patients who are having excessive pain, or difficulty with function or activities of daily living, may be more likely to present for follow-up than patients who are doing well. If one treatment method is superior to another but the follow-up rate is low (e.g. 60%), the superior method may appear inferior if only those with poor outcomes attend for follow-up. The following are some strategies to improve your follow-up rate:

• Upon study entry, you should obtain the following:

Patient's mailing address, telephone number and e-mail address.

The name and address of the patient's primary care physician.

The name, address and phone number of three people at different addresses, with whom the patient does not live, who are likely to be aware of the patient's location.

• The study coordinator should call patients to remind them of upcoming study visits.

• Study personnel should contact patients no less frequently than once every three months to maintain contact and to stay aware of any change in residence.
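As a small operational sketch (the patient identifiers, dates and 90-day rule are invented), the snippet below flags enrolled patients who are overdue for their three-monthly contact, the kind of simple tracking that keeps potential losses to follow-up visible early.

```python
from datetime import date, timedelta

CONTACT_INTERVAL = timedelta(days=90)   # contact at least every three months

# Invented tracking records: patient id and date of last successful contact.
last_contact = {
    "P001": date(2005, 1, 10),
    "P002": date(2005, 4, 2),
    "P003": date(2004, 11, 20),
}

def overdue(records, today):
    """Return patients whose last contact is older than the allowed interval."""
    return sorted(pid for pid, seen in records.items()
                  if today - seen > CONTACT_INTERVAL)

print(overdue(last_contact, today=date(2005, 5, 1)))   # ['P001', 'P003']
```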

Statistical analysis

Your protocol should include a description of how you will handle descriptive and analytical statistics. The presentation of descriptive data on the study population is important for a number of reasons:

• It enables you to determine the comparability of study groups at baseline and to evaluate the likelihood of any selection bias, or confounding.

• Descriptive tables presented typically describe all enrolled patients. This can allow the reader to determine, when not explicitly stated, the extent of loss to follow-up.

• The baseline characteristics of the study population can help in determining the generalisability of the results to your own study population.

The purpose of analytical statistics is to report the effects of treatment and the risk factors for specific outcomes. These rely on the testing of statistical hypotheses. The testing of a statistical hypothesis (sometimes called testing of statistical significance) will be an important application in your clinical study. Statistical tests aim to distinguish true differences (associations) from chance. It is worth going back to the basics and revisiting The Scientific Method which serves as the foundation for all research.

The sequence of events outlined by The Scientific Method is the following:

• Start with an idea or question.

• Develop a testable hypothesis.

• Specify a null hypothesis.

• Reject (or fail to reject) the null hypothesis.

• Repeat the experiment.

A classical example is the rolling of dice. The null hypothesis is that the dice are fair. If a series of rolls produces a result that would be extremely unlikely with fair dice, then either an extremely rare event has occurred or the dice are not balanced, and the null hypothesis is rejected. When the null hypothesis is false, the research hypothesis and the researcher's “hunch” may be correct. When the null hypothesis is true, the research hypothesis is false and the researcher's “hunch” was wrong.

For example, you may hypothesize that surgical treatment of distal radius fractures in elderly women is more effective than conservative management. The null hypothesis is that there is no difference between these treatment methods. The research hypothesis is that surgical management provides better patient outcomes among elderly women than conservative management. Let us assume that the truth (for this discussion) is that there is no difference between these two methods. In this case, the null hypothesis would be correct. If your data lead to the conclusion that surgical management is more effective, then a true null hypothesis would be rejected and a Type I error would occur. The P-value helps to guard against this mistake. If, however, the null hypothesis is incorrect and your data lead to acceptance of the false null hypothesis, then, unfortunately, it may appear that surgical management is no more effective. Accepting a false null hypothesis is a Type II error. The power analysis helps to guard against this mistake.

Type II errors occur when the null hypothesis is wrong, but we fail to reach that conclusion because of limitations in our data set. A common explanation is a sample of subjects that is too small. Failure to achieve statistical significance when comparing two groups is more likely to be due to inadequate power than to there being no difference. This is why a power analysis is so important in the study planning process. The power analysis considers the number of subjects needed, the differences to be detected, a specified P-value and the variability in the data. The power gives us some degree of assurance (80%, 90%, 95%, etc.) that, if no statistical difference is found, the conditions of the study design were adequate to detect a difference had there been one to detect. By convention, power is usually set at 80%, or 0.80. This means that, if a true difference of the specified size exists, the study will detect it 80% of the time and will incorrectly conclude “no difference” 20% of the time. Depending on the importance of avoiding Type II errors, the power may be set much higher. It is important to note that you do not have to know how to do a power analysis, or a sample size estimate, to know when one needs to be done. It is advisable to have an epidemiologist, or statistician, on whom you can count for this aspect of the study planning.
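For orientation only (it is no substitute for involving a statistician), the sketch below applies the standard normal-approximation formula for comparing two independent proportions; the assumed infection rates, significance level and power are invented for the example.

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided comparison of two proportions
    using the normal-approximation formula."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_beta = NormalDist().inv_cdf(power)            # z corresponding to the power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)                             # round up to whole patients

# Example assumptions: 20% deep-infection rate with the control treatment,
# and we want 80% power to detect a reduction to 10% at alpha = 0.05.
print(n_per_group(p1=0.20, p2=0.10))                # about 197 patients per group
```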

By convention, P-values of 0.05, or less, are accepted as statistically significant. The following are important characteristics of P-values:

• They help to determine the probability that the conclusions reached are due to chance alone.

• They are mathematical representations of the probability that the researcher is wrong if the null hypothesis is rejected.

• They are a probability estimate of the possibility that the null hypothesis is false.

• They are never a clear yes or no—merely a guide to action.

There is nothing magical about 0.05. A significance level of 0.05 means accepting a 1-in-20 chance of rejecting the null hypothesis when it is in fact true.

The use of effect measures, such as relative risks (RR), odds ratios (OR), relative risk reductions (RRR), number needed to treat (NNT) and their corresponding confidence intervals, can provide more useful information than a P-value. A much more thorough description of these useful tools can be found in the “Clinical Studies” section of the AO Foundation website: http://www.aofoundation.org/wps/portal/Home. Table 6 is a checklist that you may use when writing your protocol, or evaluating a therapeutic study that has already been published.

Table 6.

Methodological principles applied to the evaluation of therapeutic studies

Statement of concealed allocation (a)

Intention-to-treat principle (a)

Independent blind assessment

Patient-reported outcomes

Complete follow-up of >90%

Adequate sample size

Appropriate analysis and use of effect measures

Controlling for possible confounding

Inclusion and exclusion criteria

(a) Evaluated in randomized controlled trials only.
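To make the effect measures mentioned above (RR, OR, RRR, NNT) concrete, here is a minimal sketch with an invented 2x2 table of deep infection by treatment group; it computes each measure directly from the counts.

```python
# Invented 2x2 table: deep infection (event) by treatment group.
#                        event   no event
# new implant (treated)    10       90
# standard care (control)  20       80

treated_events, treated_total = 10, 100
control_events, control_total = 20, 100

risk_treated = treated_events / treated_total               # 0.10
risk_control = control_events / control_total               # 0.20

rr = risk_treated / risk_control                            # relative risk = 0.50
odds_treated = treated_events / (treated_total - treated_events)
odds_control = control_events / (control_total - control_events)
or_ = odds_treated / odds_control                           # odds ratio ≈ 0.44
rrr = 1 - rr                                                # relative risk reduction = 50%
arr = risk_control - risk_treated                           # absolute risk reduction = 10%
nnt = 1 / arr                                               # number needed to treat = 10

print(f"RR={rr:.2f}  OR={or_:.2f}  RRR={rrr:.0%}  ARR={arr:.0%}  NNT={nnt:.0f}")
```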

Step 3: implementing the study plan

Whether you are conducting a small study at your local institution, or a large international multi-site trial that will require oversight by the United States (US) Food and Drug Administration (FDA) (http://www.fda.gov/oc/gcp/default.htm) and/or European Union (EU) (http://europa.eu.int/pol/rd/index_en.htm), it is prudent to get into the habit of following Good Clinical Practice (GCP) procedures. GCP is an international, ethical and scientific quality standard for designing, conducting, recording and reporting trials that involve the participation of human subjects. Compliance with this standard provides public assurance that the rights, safety and well-being of trial subjects are protected (consistent with principles that have their origin in the Declaration of Helsinki) and that the clinical trial data are credible. A discussion of adhering to GCP, including developing the study operations manual, site initiation visits, recruiting and enrolling subjects, entering data and correcting errors, study site visits, handling missing data and subject withdrawals and final study closure, will be reserved for another publication. The key is to be organized, establish a process and adhere to it, pursue follow-up aggressively and be willing to modify, or enlarge, a study if you see potential problems. It is better to address these issues early than to wait and have a reviewer of your manuscript identify them.

Step 4: reporting the results

Once you have developed your study idea, developed your study plan, and executed your study successfully, you can begin to discover the truth behind your study questions. Is treatment A better than treatment B? Does it depend on what group of patients received the treatment (e.g. young versus old)? Are patients really better off receiving this new implant or surgical technique? Is it possible that the old way is the best way? Is it possible that it does not matter which technique is used?

Finding these answers is the motivating force behind performing a clinical study. Although it depends on the standards of the journal to which you choose to submit your manuscript, a general outline that you can follow for writing your manuscript may include the following sections: introduction, methods, results, discussion and conclusion.

In general, the introduction motivates the purpose of the research, outlines the objectives, and explains why they are important and why your hypothesis makes clinical sense.

The methods should provide sufficient detail for a reader to be able to reproduce your study. It is very important to discuss your statistical methods and to demonstrate that you did a power analysis.

When reporting results, it is important to report the actual data, not just P-values, so that the reader can differentiate between statistical and clinical significance. Furthermore, reporting effect measures (e.g. RRs, RRRs and NNT), when appropriate, makes the findings more clinically useful. Concise tables and graphs are good supplements to the text and may be a more efficient way to report some of your data.

The discussion section allows you to describe the significance of your results, contrasting statistical and clinical significance. It also allows you to discuss your strengths and weaknesses (Table 7). It is better to be honest than to have a reviewer, or reader, point out issues that you had neglected to address. Finally, be careful not to use this section as a platform for clinical opinion. It is very refreshing to read a discussion that is clear and concise and that stays within the boundaries of the study being reported.

Table 7.

Guidelines to consider when writing the discussion section of your manuscript

Discuss the implications of the primary analyses first √

Distinguish between statistical and clinical significance √

Discuss any weaknesses and strengths in your research design, or problems with data collection, analysis or interpretation √

Discuss the results in the context of the published literature √

Discuss the generalisability of the results √

The conclusion allows you very briefly to summarize the principal findings of your study. Limit your conclusions only to those supported by the results of your study. Unsupported conclusions are very common in scientific research. Consider the guidelines outlined in Table 8.6

Table 8.

Guidelines to consider when writing the conclusion section of your manuscript

You should provide equal emphasis on positive and negative findings √

Results of secondary or post hoc analyses should be presented as exploratory √

Conclusions should be based on fact and logic, not supposition or speculation √

Studies using surrogate endpoints (e.g. muscle strength, range of motion, perhaps even bony union) should be interpreted with caution. In other words, just because a patient has good shoulder strength and range of motion does not necessarily mean they have a good final outcome if they cannot perform activities of daily living √

Step 5: submitting for publication

Before submitting your manuscript for publication, it is important to have your peers review it. In fact, it is a good idea to have a number of people review it as you develop it. For example, you may want to have your methods section reviewed before you write up the results. Changes in your methods section will undoubtedly affect the way you report the results. Expect this process to be lengthy. Time spent having your colleagues review your paper is time saved when you submit it to a journal. It may even be the difference between acceptance and rejection. If you are asking colleagues to review your manuscripts (whether they are co-authors or not), be prepared to return the favour.

One of the following three things is likely to happen when you submit your manuscript to a journal:

• rejection;

• revision request:

acceptance implied,

acceptance possible;

• acceptance.

The following are some important principles that you should consider when submitting your manuscript for publication3:

• Select a journal that is most appropriate for the audience you want to reach.

• Consider writing to the editor of one or more journals to determine whether or not they are interested in publishing your topic, especially if it is unique.

• Make sure that you adhere strictly to the selected journal's guidelines for formatting and submission.

• Ensure that your paper is statistically sound. Most editors take a close look at the statistical plan and power analysis.

• If rejected, do not be discouraged. There are plenty of other journals out there.

• Whether your paper is rejected or a revision is requested, read the reviewers’ comments carefully and unemotionally.

• If you make the requested revisions, your paper has a high likelihood of being accepted. Make sure that you respond to the reviewers’ comments in a timely and organized fashion.

• Do not be afraid to argue your case respectfully if you feel that you have been misinterpreted or misunderstood; however, be careful with this. Comments and criticisms are generally informed and should be considered seriously. Debate may make acceptance less predictable.

• Persevere and be patient.

Conclusions

There are five major steps that one must navigate successfully to take a study idea and turn it into a publication that may have an impact on clinical practice. These steps include developing the study question(s), developing the study plan, implementing the study plan, reporting the results and submitting the manuscript(s) for publication. Each step is a process in itself and should be treated as such. Patience is a virtue in clinical research, but when practised will lead to significant contributions and improvements in the area of patient care.

References

1 M. Byrne, Cancer chemotherapy and quality of life, BMJ 304 (1992), pp. 1523–1524.

3 P. Cummings and F.P. Rivara, Responding to reviewers’ comments on submitted articles, Arch Pediatr Adolesc Med 156 (2002), pp. 105–107.

4 T.D. Koepsell and N.S. Weiss, Randomized trials, Epidemiologic methods: studying the occurrence of illness, Oxford University Press, Inc., New York (2003) p. 308–45.

5 T.D. Koepsell and N.S. Weiss, Cohort studies, Epidemiologic methods: studying the occurrence of illness, Oxford University Press, Inc., New York (2003) p. 346–73.

6 T.A. Lang and M. Secic, How to report statistics in medicine, American College of Physicians, Philadelphia, PA (1997) p. 367.

7 M.L. Slevin, H. Plant and D. Lynch et al., Who should measure quality of life, the doctor or the patient?, Br J Cancer 57 (1988), pp. 109–112.

8 M. Suk, B.P. Hanson, D.C. Norvell and D.L. Helfet, AO handbook. Musculoskeletal outcomes measures and instruments, AO Publishing, Davos, Switzerland (2004) p. 444
