Treatment Planning Systems

Algorithms, Commissioning and Quality Assurance

Treatment Planning System Basics

What is a treatment planning system?

A Treatment Planning System (TPS) is a computer system used to determine optimal beam arrangements, energies, field sizes, and ultimately the fluence pattern needed to produce a safe and effective dose distribution.

Modern treatment planning systems are divided into three main components:

Beam model

The beam model is a computerized representation of a beam, defined by energy distribution, machine specific geometry, and beam modifiers such as MLC, flattening filters, and wedges.

Dose Calculation Engine

The dose calculation engine is responsible for applying the beam model to a given patient/phantom geometry and accurately calculating dose.

Beam models range from extremely simple representations to full simulations of individual particles traveling from the treatment head to the region of interest. Selection of calculation engine is often a trade-off between increased accuracy and increased computation time.

Optimization Engine

The optimization engine determines the optimal arrangement of fields and field modifiers used to produce the treatment plan.

In classical 3D treatment planning, the human treatment planner serves as the optimization engine manually manipulating variables to achieve a reasonable plan. IMRT (intensity modulated radiotherapy) and VMAT (volumetrically modulated arc therapy) use a computerized optimization engine and inverse planning to produce complex field arrangements.

Calculating Dose to Medium

Absorbed dose calculation depends upon the material in which the dose is deposited and the material between the source and the region of interest. There are three common dose calculation conventions:

Dose-to-water with radiation transport in water (Dw,w)

Historically, most dose computation algorithms (pencil beam, collapsed cone, etc.) have computed Dw,w, as this is the dose directly measured during TPS commissioning and treatment machine calibration.

Dose-to-medium with radiation transport in medium (Dm,m)

This is inherently the most accurate dose calculation option but is difficult to implement without Monte Carlo.

Dose-to-water with radiation transport in medium (Dw,m)

Monte Carlo calculations, which typically compute Dm,m, can be converted to a dose to water (Dw,m) by multiplication with the water-to-medium ratio of unrestricted mass collision stopping powers.
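Written explicitly (with the stopping power ratio understood to be averaged over the local secondary electron spectrum, a notational assumption of this write-up), the conversion takes the form:

$$ D_{w,m} \;=\; D_{m,m} \cdot \left(\frac{\overline{S}/\rho\,\big|_{w}}{\overline{S}/\rho\,\big|_{m}}\right) $$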

Dw,m is often reported in Monte Carlo simulations because dose to water has been the historical quantity used in evaluating treatment efficacy.

Can we trust Dose-to-Water with radiation transport in medium?

All three dose calculation methods yield similar results in soft tissue, but significant dose differences are found in bone.

There is some evidence that Dw,w may actually be closer to Dm,m than Dw,m is. For this reason, and because Dw,w has been used in determining historical tissue tolerances, AAPM MPPG 5.a recommends Dw,w over Dw,m.

Reference: Ma C-M and Li J, Dose specification of radiation therapy: dose to water or dose to medium? Phys. Med. Biol. 56 (2011) 3073-3089.

Comparison of collapsed cone (CC) dose to water with water radiation transport with Monte Carlo (MC) dose to medium with medium radiation transport and dose to water with medium transport. Note that CC dose to water is closer than MC dose to water to MC dose to medium. Image credit: RayStation.

Which dose to use?

In soft tissue targets, the dose differences are small (1-2%).
In bone, however, calculation methods may vary by as much as 12-15%.

Key Point: Today, essentially all clinical implementations calculate dose-to-water, both because it is directly traceable to calibration (TG-51) and because the majority of radiation oncology efficacy data has been gathered using this convention.

Advantages of Dose-to-Water

  • Most clinical experience has historically used dose-to-water.
  • Treatment units are calibrated using water, providing a direct link between calibration and calculation.
  • Clinical QA is performed in phantoms which are water like (e.g. acrylic, solid water)
  • Even for tumor cells in bone, the cells may be chemically more similar to water than to bone.

Advantages of Dose-to-Medium

  • Dm,m is the most accurate representation of absorbed dose to the tissue of interest.
  • For Monte Carlo, converting Dm,m to Dw,m may introduce additional error.
  • Most treatment sites are in soft tissue, where the difference between Dw,w and Dm,m is small (1-2%). Therefore, historical clinical experience may remain useful.

Dose Calculation Algorithms

Dose Calculation Algorithm Comparison

Pencil Beam
  • Advantages: Very fast; accurate in homogeneous medium
  • Disadvantages: Inaccurate in heterogeneous medium

Convolution / Superposition
  • Advantages: Clinically acceptable accuracy in heterogeneous medium; fast enough for clinical use
  • Disadvantages: Not as accurate as Monte Carlo; inaccurate near junctions of very different density materials

Discrete Ordinates / Boltzmann Transport
  • Advantages: Directly calculates macroscopic behavior of radiation traveling through matter; no statistical noise; accuracy near that of Monte Carlo
  • Disadvantages: Energy and angle discretization result in loss of accuracy

Monte Carlo
  • Advantages: Most correctly predicts dose near inhomogeneities; most accurately models the physics of interactions
  • Disadvantages: Computationally expensive; simplifications must be made to reduce calculation times for clinical use; subject to statistical noise if an insufficient number of particles is simulated

Kernel Based Algorithms

Kernel based algorithms utilize a kernel  and ray tracing to model dose deposition from an interaction at a given point. Dose may be calculated by summing  and scaling kernels according to the energy fluence at all points. Simple kernel based models, such as pencil beam, may use a line kernel which together with ray tracing and knowledge of beam fluence produces very fast dose computations. More complex kernel based models, such as convolution/superposition, are slower but yield a more accurate result by correcting for density variations and beam divergence.

What is a Kernel?

  • A kernel represents the energy spread resulting from an interaction at a given point or along a line
    • The energy spreads because charged particles and scattered photons carry energy away from the site of the primary interaction
  • Kernels are pre-calculated using complex Monte Carlo simulations
  • Both line and point kernels are radially symmetric

What is Ray Tracing?

Ray tracing algorithms are used in kernel based dose calculation algorithms to transport energy from the radiation source through the patient/phantom data set.

  • Steps in ray tracing
    1. A ray is generated originating at the radiation source, projecting through the aperture (collimators, MLC, blocks, etc), and into the patient data set.
    2. The points of intersection between a ray and the boundary of a voxel are identified.
      • These points of intersection are circled in blue in the image at right.
    3. For each voxel, the distance between the two points of intersection between the ray and the voxel boundary is computed.
    4. The distance determined in step 3 is used to scale the fluence through the voxel contributed by the ray (a simplified sketch of this computation follows the figure below).
      • E.g. Since the distance A-B is greater than distance C-D, voxel 2 will receive more fluence from this ray than voxel 6.
  • Proper ray sampling is important to balance calculation accuracy against computation time.
    • In the example at right, with only two rays, voxels 8 and 11 would be assumed to receive no fluence!
Illustration of ray tracing algorithm.
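As a rough illustration of steps 2-4 above, the sketch below is a minimal 2D Siddon-style traversal that returns the intersection length of a ray within each voxel. The grid origin, voxel size, and ray endpoints are assumed, illustrative values and not part of any particular TPS.

```python
import numpy as np

def voxel_path_lengths(p0, p1, grid_shape, voxel_size):
    """Return {(ix, iy): length} of the ray p0 -> p1 inside each voxel.

    Simplified 2D Siddon-style traversal; the grid origin is assumed to
    sit at (0, 0) with voxel_size spacing.
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    direction = p1 - p0

    # Parametric positions t (0..1) at which the ray crosses voxel boundary planes.
    t_values = [0.0, 1.0]
    for axis in range(2):
        if direction[axis] != 0.0:
            planes = np.arange(grid_shape[axis] + 1) * voxel_size[axis]
            t = (planes - p0[axis]) / direction[axis]
            t_values.extend(t[(t > 0.0) & (t < 1.0)])
    t_values = np.unique(t_values)

    lengths = {}
    for t_in, t_out in zip(t_values[:-1], t_values[1:]):
        midpoint = p0 + 0.5 * (t_in + t_out) * direction
        idx = tuple(np.floor(midpoint / voxel_size).astype(int))
        if all(0 <= idx[a] < grid_shape[a] for a in range(2)):
            # The segment length inside the voxel scales the fluence it receives.
            lengths[idx] = lengths.get(idx, 0.0) + (t_out - t_in) * np.linalg.norm(direction)
    return lengths

# Example: ray from a source point outside a 4x4 grid of 1 cm voxels through the grid.
print(voxel_path_lengths((-1.0, 2.5), (5.0, 1.5), grid_shape=(4, 4), voxel_size=(1.0, 1.0)))
```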

Kernel Based Algorithms: Pencil Beam

Pencil beam dose computation is the simplest and fastest kernel based dose computation method. Pencil beam dose calculations typically use a line kernel to model dose deposition and often neglect modifying dose deposition based on medium density. This makes pencil beam algorithms a poor choice for very heterogeneous treatment sites such as the lungs.

Steps in computing pencil beam dose

  1. The aperture is projected through the patient/phantom and is subdivided into small beamlets.
  2.  Pencil kernels are applied to each beamlet giving a beamlet specific dose distribution.
  3. These dose distributions are then simply summed resulting in the total dose map.

Kernel Based Algorithms: Convolution / Superposition Algorithms

Convolution/superposition algorithms use one or more point Kernels rather than a line Kernel. This allows the kernel to be scaled according to local and surrounding density. Additionally, multiple point kernels are typically available representing dose deposition of different energy ranges in the clinical beam. This allows the algorithm to accurately model beam hardening as the beam traverses the medium. Advanced algorithms will include a kernel tilting component which shifts the orientation of the kernel to match the diverging primary beam.

To compute dose, the kernel is convolved with TERMA (Total Energy Released per unit MAss), yielding absorbed dose. Convolution, ⊗, is a mathematical operation used to combine two functions.

Steps in computing convolution/superposition dose

  1. Model primary photons incident on the phantom/patient.
    • A finite source size and location is determined in commissioning.
    • Mask function is a mathematical expression defining the outlines of collimators and MLC/block position.
      • The mask function reduces transmission in the blocked areas by a factor determined in commissioning.
    • Aperture function is used to model the penumbral blurring that results from collimator and MLC positions.
      • The aperture function's shape is that of a normal (Gaussian) function defined by the finite source size and location. This allows the aperture function to account for magnification effects.
    • Extrafocal radiation is modeled by the addition of a broad, normally distributed fluence map to the fluence map produced by the primary source, mask function, and aperture function.
      • Extrafocal radiation is produced by Compton scattering in the flattening filter and, to a small degree, in other treatment head components. This causes fluence outside the mask to be higher than would be accounted for by attenuated primary photons alone.
  2. Ray-tracing projects fluence through the phantom/patient.
  3. TERMA is calculated from fluence and mass attenuation coefficient.
    • Ray attenuation is based on voxel density and the energy fluence spectrum.
      • Energy fluence spectrum changes as the beam hardens with depth.
      • It is common to use a fluence attenuation table (FAT), which maps attenuation as a function of depth and density, for this purpose.
  4. Dose is computed by convolution of TERMA and the Kernel.
    •  D = TERMA⊗K
    • Kernel Tilting
      • The orientation of the Kernel, which is radially symmetric, should ideally align with the vector of the photon interaction with the medium.
      • To accomplish this the Kernel may be tilted – its axis reoriented in space – to align with the beam’s divergence.
      • Kernel tilting has only been shown to result in a minor improvement in calculation accuracy.​1​
    • Superposition
      • Phantom heterogeneities greatly influence scatter, dose, and, by extension, Kernel. For a perfect calculation, each voxel would require its own Kernel based on the density and material of itself and its surrounding voxels.
      • Superposition solves this problem by modifying the Kernel by scaling by the radiologic distance.
        • Radiologic distance = ρ̄ · (r − r′), the geometric distance scaled by the average physical density along the path
          • r is the location of photon interaction
          • r′ is the location of dose deposition
          • ρ̄ is the average physical density between r and r′
    • There are additionally several techniques for applying the convolution.
      • Direct Summation
        • Directly applies the kernel to TERMA in each dose calculation
        • Very computationally expensive but allows superposition
          • Scales as N^6, where N is the number of voxels!
      • Fast Fourier Transforms (FFT)
        • Uses the Fast Fourier Transform, FFT(f ⊗ g) = FFT(f) · FFT(g), to simplify the convolution process and reduce calculation times (see the sketch after this list).
          • Scales as N^3·log(N), where N is the number of voxels.
        • Cannot use superposition, making FFT less accurate.
      • Collapsed Cone Convolution (CCC)
        • Speeds up calculation by reducing the point kernel from a full 3D object to a finite number of rays (cones) projecting away from the primary interaction site.
          • Consider the kernel to be a soccer ball with each shell composed of patches. If these patches are projected to the center of the ball, each forms a cone. CCC collapses each of these cones into a single ray which may be ray traced to deposit dose.
  5. Electron contamination dose is added to dose distribution.
    • For MV beams, most surface dose arises from electrons scattered in the treatment head.
    • The spectrum of contaminant electrons agrees approximately with that of an electron beam with a practical range slightly greater than depth of maximum dose for the photon beam.
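As a rough illustration of the FFT approach referenced above, the sketch below reduces the convolution step D = TERMA ⊗ K to a pointwise product in frequency space. The TERMA distribution and kernel here are toy arrays, not a commissioned beam model.

```python
import numpy as np

shape = (32, 32, 32)

# Toy TERMA distribution: a crude rectangular "field" of uniform TERMA.
terma = np.zeros(shape)
terma[8:24, 8:24, 4:28] = 1.0

# Toy radially symmetric point kernel (a clinical kernel would be Monte Carlo generated).
z, y, x = np.indices(shape)
r2 = (x - 16) ** 2 + (y - 16) ** 2 + (z - 16) ** 2
kernel = np.exp(-r2 / 10.0)
kernel /= kernel.sum()          # normalize so deposited energy is conserved

# FFT(f ⊗ g) = FFT(f) · FFT(g): the convolution becomes a pointwise product.
dose = np.real(np.fft.ifftn(np.fft.fftn(terma) * np.fft.fftn(np.fft.ifftshift(kernel))))
```

Because the same kernel is applied everywhere on the grid, this form cannot perform density-scaled superposition, which is the accuracy limitation of FFT convolution noted above.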

Boltzmann Transport Dose Computation

  • The Boltzmann Transport Equation describes the behavior of radiation traveling through matter at a macroscopic level (a common written form is shown after this list).
    • Ψ is the energy fluence
    • σt is the total interaction cross section
    • Ω is the particle direction vector
    • Q is the radiation source (initial energy fluence)
  • May be directly solved (e.g. Eclipse Acuros) or solved by simulation (i.e. Monte Carlo)
  • Assumptions
    • Radiation particles interact only with medium, not with each other.
    • The number of particles emitted from the source is equal to the number of particles transported plus the number absorbed.
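For reference, a common time-independent form of the linear Boltzmann transport equation, using the symbols defined above (the explicit scatter-source notation here is an assumption of this write-up, not taken from a specific vendor document), is:

$$ \hat{\Omega} \cdot \nabla \Psi(\vec{r}, E, \hat{\Omega}) \; + \; \sigma_t(\vec{r}, E)\, \Psi(\vec{r}, E, \hat{\Omega}) \;=\; Q_{\mathrm{scatter}}(\vec{r}, E, \hat{\Omega}) \; + \; Q(\vec{r}, E, \hat{\Omega}) $$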

Steps in Computing Boltzmann Transport Equation Dose

  1. Ray tracing transports primary fluence from source through patient/phantom.
  2. Calculate scattered photon fluence through patient/phantom.
  3. Calculate scattered electron fluence though patient/phantom.
  4. Compute dose.

Monte Carlo Dose Computation

Monte Carlo (MC) is a method of finding numerical solutions to a problem by random simulation. MC may be used to compute dose distributions by simulating the interactions of a large number of particles (photons, electrons, protons, etc) as they travel through a medium. Further, random noise is inherent in the MC method, requiring about 10^4 histories (simulated particle interactions) per voxel to achieve ±1% calculation accuracy. Because MC simulates large numbers of interactions at an atomic level, it is both the most accurate and most computationally intensive method of dose calculation.

Steps in computing Monte Carlo Dose

  1. A particle is created by simulation traveling along a vector determined by random weighted probability.
  2. The distance to the particle’s next interaction is randomly assigned based on the linear attenuation coefficient of the material the particle passes through (a minimal sketch of this sampling follows the list).
  3. Ray tracing transports the particle to the interaction site.
  4. The material at the interaction site is identified and the type of interaction taking place is determined based on known interaction probabilities.
  5. Simulate the interaction which may involve energy deposit, scattering, release of additional particles to be tracked, etc.
  6. Steps 2-5 repeat until the particle’s energy falls below a threshold energy. At that point the remaining energy is deposited locally as dose, and the process (steps 1-5) is repeated for a large number of particle histories.
    • Threshold energy impacts both the speed and accuracy of calculation, with higher thresholds speeding calculation at the expense of reduced accuracy.
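A minimal sketch of the free-path sampling in step 2, assuming a known linear attenuation coefficient (the numeric value below is only illustrative):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def sample_free_path(mu):
    """Distance (cm) to the next interaction for linear attenuation coefficient mu (1/cm).

    Free paths follow an exponential distribution with mean 1/mu, so inverting the
    cumulative distribution with a uniform random number gives the sampled distance.
    """
    return -np.log(rng.random()) / mu

# Illustrative only: mu is on the order of 0.05-0.07 /cm for a megavoltage beam in water.
print([round(sample_free_path(0.07), 1) for _ in range(5)])
```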

Techniques for accelerating Monte Carlo calculations

Variance reduction

  • Variance reduction techniques simplify the physics modeled in the MC simulation.
  • Bremsstrahlung Splitting is a common method of variance reduction.
    • Rather than randomly determining a single direction and energy for a Bremsstrahlung photon, the photon may be “split” into several photons of lower “weight” (i.e. each reduced-weight photon deposits proportionally less dose than its energy would otherwise entail but is transported normally). This reduces statistical noise and thereby reduces the total number of primary photons that must be simulated.

Condensed histories

  • Groups and simplifies interactions that don’t appreciably change the energy or direction of the particle.
  • Divides interactions into “hard” and “soft” collisions.
    • Hard collisions are those which have a large impact on particle energy and direction. Hard collisions are treated normally.
    • Soft collisions are those with little impact on the energy and direction. All soft collisions bounded between hard collisions are treated as a group with the hard collision. Soft collisions may also be grouped and treated at regular intervals rather than only during hard collision events.

Russian Roulette

  • Rejects most particles that are unlikely to contribute to the final dose.
    • E.g. secondary electrons created in treatment head.
  • Retains (and up-weights) a random fraction of these particles, hence the name “Russian Roulette.”

Range Rejection

  • When an electron has low enough energy that it cannot reach another voxel, its energy is assumed to be deposited locally.
  • This slightly overestimates local dose, as it ignores potential low energy Bremsstrahlung photon production that would carry energy away from the voxel.

Phase Space Files

  • A full Monte Carlo simulation would take electrons emerging from the bending magnets and model their interactions through the treatment head (target, flattening filter, scattering foil, jaws, MLC, etc) and the patient/phantom.
  • To save computation time, treatment head interactions may be pre-calculated and stored in a phase space file.
    • Phase space files tend to be quite large, making their use a potential bottleneck.
    • This bottleneck can be avoided either by simplifying the interactions modeled in the file (i.e. using variance reduction techniques within the phase space file) or by using the phase space file to create a virtual source.
  • Phase space files may also be created by physical beam measurements to create a virtual source exiting the treatment head.

Plan Optimization Algorithms

Treatment planning involves a balancing act between competing clinical goals. For example, a target (tumor) should ideally receive a full prescription dose while the surrounding healthy organs should receive as close to zero dose as is achievable. The goal of plan optimization, then, is to find an optimal trade-off between inherently conflicting clinical goals given the set of treatment delivery limitations.

Optimization Parameters

Cost function

Cost functions, also known as objective functions, quantify the degree to which a treatment plan meets its competing objectives. The goal of all optimizers is to minimize the cost function by modifying the dose distribution within the constraints of clinical deliverability.


Dose-Volume Histogram (DVH) Constraints

  • dmax refers to the maximum dose.
  • dmin refers to the minimum dose.
  • Volume at dose: V(x)[dose unit] [<, >, =] y[volume unit]
    • Constrains the volume of a structure receiving a given dose.
    • Example: V20Gy<30% indicates that the volume receiving 20Gy should be less than 30% of the structure’s total volume.
      • x[Dose unit] indicates the amount of dose to quantify volume against. Units typically Gy or % of prescription dose.
      • [<, >, =] indicates the direction of the constraint.
      • y[volume unit] indicates the volume being constrained at the given dose level. Units typically cc or % of structure volume.
  • Dose at volume: D(y)[volume unit] [<, >, =] (x)[dose unit]
    • Constrains the dose that a given volume of a structure may receive.
    • Example: D95% >50.4Gy indicates that 95% of the structure’s total volume should receive more than 50.4Gy (both example constraints are evaluated in the sketch below).
Dose volume histogram indicates dose at volume and volume at dose locations.
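As an illustration of how these constraints are evaluated, the following sketch computes V20Gy and D95% from a structure's voxel doses. The dose values are randomly generated placeholders, not patient data.

```python
import numpy as np

def volume_at_dose(doses, dose_level):
    """V(x): percent of structure volume receiving at least dose_level (Gy)."""
    return np.mean(doses >= dose_level) * 100.0

def dose_at_volume(doses, volume_percent):
    """D(y): minimum dose (Gy) received by the hottest volume_percent of the structure."""
    return np.percentile(doses, 100.0 - volume_percent)

# Toy voxel doses for a structure (Gy); real values would come from the TPS dose grid.
doses = np.random.default_rng(0).normal(loc=25.0, scale=8.0, size=10_000)
print(f"V20Gy = {volume_at_dose(doses, 20.0):.1f}%")    # compare against V20Gy < 30%
print(f"D95%  = {dose_at_volume(doses, 95.0):.1f} Gy")  # compare against D95% > 50.4Gy
```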

Equivalent Uniform Dose (EUD) Constraints

Radiobiology distinguishes between parallel and serial organs. Equivalent Uniform Dose allows a user to directly constrain the dose distribution according to the voxel doses, d, and a parameter, α, describing the degree to which the organ is serial or parallel.
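A commonly used form is the generalized EUD (gEUD); the standard expression is shown here for reference, where d_i is the dose to voxel (or DVH bin) i and v_i is its fractional volume:

$$ \mathrm{EUD} \;=\; \left( \sum_i v_i \, d_i^{\,\alpha} \right)^{1/\alpha} $$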

What α value should be used?

  • Parallel organs should be given an α of 1.
    • Note that in this case, EUD reduces to mean dose.
  • Serial organs may be given an α of 10.
    • This approximates a maximum dose constraint.
  • Targets may also be constrained with EUD by using negative values of  α.

Common Optimization Algorithms

Fluence Map Optimization (FMO)
  • Overview: Optimizes the fluence map then attempts to achieve this fluence pattern using real machine parameters.
  • Advantages: Fast; all optimal solutions are globally optimal.
  • Disadvantages: Final dose distribution is often sub-optimal due to machine constraints.

Direct Machine Parameter Optimization (DMPO)
  • Overview: The optimizer directly acts on segment parameters such as MLC leaf position and segment weight to yield the best achievable dose distribution.
  • Advantages: Optimized solution is machine deliverable.
  • Disadvantages: More complicated to solve than FMO; local optima may not be the best solution.

Multi-Criteria (Pareto) Optimization (MCO)
  • Overview: The optimizer generates a series of Pareto optimal plans and allows the user to choose between them. Pareto optimal plans are those plans where a given parameter cannot be improved without hurting another parameter.
  • Advantages: Simplifies the planning process; allows the user to directly visualize trade-offs among the best plans.
  • Disadvantages: Very computationally expensive; requires knowledge of optimal characteristics prior to planning.

Fluence Map Optimization (FMO)

Fluence Map Optimization (FMO) uses gradient based methods (1st and 2nd derivatives) to arrive at an optimal fluence distribution. Once the optimal fluence distribution is found, leaf-sequencing is used to find a set of machine deliverable apertures which approximate this ideal fluence map. Note, the actual deliverable fluence map will not match the ideally calculated fluence map due to machine limitations.

Advantages of FMO

  • FMO is computationally inexpensive (fast) compared to Direct Aperture Optimization.
  • A global optimum fluence map can always be found because the cost function with associated delivery constraints is convex.
    • I.e. For convex functions, a local minimizer is a global minimizer.

Disadvantages of FMO

  • Because machine parameters are not directly optimized, the final dose distribution after leaf-sequencing is unlikely to represent the best possible dose distribution the machine could produce.

Direct Machine Parameter Optimization (DMPO)

Direct Machine Parameter Optimization (DMPO), sometimes referred to as Direct Aperture Optimization, is a plan optimization method that directly considers leaf position and segment weights as variables during optimization. VMAT implementations of DMPO also consider other machine parameters such as gantry, couch, and collimator limitations. DMPO produces a plan that has been optimized with actual machine parameters in mind yielding a more optimal final dose distribution than FMO.

The additional machine limitations constrain the possible optimizer solutions resulting in an optimization problem that is nonlinear and nonconvex. This means that there are locally optimal solutions which are not globally optimal. As a result, gradient methods like those used in FMO are less likely to find the absolute best solution. Put another way; the optimizer can get "stuck" in a poor solution because all small variations on the machine parameters yield a worse plan even when large changes to the machine parameters might yield a very good plan.

Approaches to solving nonconvex optimization problems

Simulated Annealing

  • Periodically introduces random changes to the variables. Changes that improve the cost function are always kept; changes that worsen it may also be accepted, with a probability that decreases as the optimization "cools" (see the toy sketch below).
  • Accepting occasional worse solutions allows the optimizer to escape from local minima in nonconvex problems.
  • The idea of simulated annealing is taken from metallurgy where annealing, by periodically adding heat (randomness), allows the metal to cool into harder (lower energy) states.
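A toy sketch of the accept/reject loop described above. The cost function, cooling schedule, and variables are arbitrary illustrations, not a clinical optimizer acting on machine parameters.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulated_annealing(cost, x0, step=0.1, t_start=1.0, t_end=1e-3, n_iter=5000):
    """Minimize a nonconvex cost function with a simple annealing schedule."""
    x = np.array(x0, float)
    best = x.copy()
    for i in range(n_iter):
        t = t_start * (t_end / t_start) ** (i / n_iter)       # cooling schedule
        candidate = x + rng.normal(scale=step, size=x.shape)  # random change
        delta = cost(candidate) - cost(x)
        # Always accept improvements; occasionally accept worse solutions so the
        # optimizer can escape local minima while the "temperature" is high.
        if delta < 0 or rng.random() < np.exp(-delta / t):
            x = candidate
            if cost(x) < cost(best):
                best = x.copy()
    return best

# Nonconvex example: many local minima, global minimum at x = 0.
cost = lambda x: float(np.sum(x**2 + 2.0 * np.sin(5.0 * x) ** 2))
print(simulated_annealing(cost, x0=[3.0, -2.0]))
```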

Column generation methods

  • Divides the optimization problem into subproblems, such as an individual aperture position, and attempts to optimize them. The optimizer then works on the global problem by optimizing segment weights.

Gradient-based methods

  • Uses the derivative and second derivative of a problem to determine the direction of optimization.

Genetic methods

  • Creates a population of possible solutions and uses “survival of the fittest” to create increasingly optimal solutions.

Multi-Criteria Optimization (MCO)

Traditional plan optimization – FMO and DMPO – assigns manual weights to DVH objectives in an attempt to yield the optimal plan. These DVH objectives and weights are, unfortunately, often not directly associated with the desired clinical goals. This means that traditional optimization is a time consuming trial-and-error process. Multi-criteria optimization attempts to resolve these issues by tying clinical goals more directly to optimization.

Prioritized Optimization

  • Clinical objectives are determined and ranked in the optimizer.
    • Example rankings
      1. Spinal Cord: Dmax < 45Gy
      2. PTV: D95% = 70Gy
      3. Larynx: V50Gy < 27%
  • The optimizer attempts to achieve the ranked clinical goals in such a way that no lower ranked objective is met at the expense of a higher ranked priority.
  • This is a good solution assuming that the user knows in advance what trade-offs he/she is willing to make.
  • Prioritized optimization’s weakness is that it will ignore even large benefits to a lower ranked objective even if the cost to a higher ranked objective is minimal and the trade-off may be desirable.

Pareto-Optimality

A plan is Pareto-optimal when it is not possible to improve the plan with respect to one objective without worsening it with respect to another objective. That is, when the plan is as good as it can be without making a trade-off.

The Pareto-surface is the set of all Pareto-optimal plans. A Pareto-surface is generated by optimizing, usually by Direct Machine Parameter Optimization, a set of Pareto-optimal plans. These plans are then interpolated, providing the set of plans along the Pareto-surface. The clinically optimal plan is assumed to fall somewhere along the Pareto-surface. Practically then, MCO revolves around producing and navigating the set of plans on the Pareto-surface.

Viewing the trade-offs between Pareto-optimal plans is referred to as “navigating the Pareto surface.” The two most common ways to navigate the Pareto-surface are the library method and the graphic user interface (slider) method.

The library method presents a small sub-set of Pareto optimal plans. The user may choose a plan for delivery or select several plans with desirable characteristics. The selected plans may be interpolated along the Pareto surface to generate additional treatment options.

The graphic user interface method presents the user with trade-off sliders. This allows the user to directly explore the planning trade-offs to be made among Pareto optimal plans.

Illustration of the Pareto-surface used in MCO optimization.

Commissioning and Quality Assurance

Selected Readings

AAPM Medical Physics Practice Guideline 5.a: Commissioning and QA of Treatment Planning Dose Calculations - Megavoltage Photon and Electron Beams (External Link)

AAPM MPPG 5 provides the minimum acceptable standards for commissioning and quality assurance of megavoltage photon and electron treatment planning dose calculation software.

AAPM TG-53: Quality Assurance for Clinical Radiotherapy Treatment Planning (External Link)

AAPM TG-53 reports on quality assurance for treatment planning process as well as QA and commissioning of the treatment planning system (TPS).

AAPM TG-106: Accelerator Beam Data Commissioning Equipment and Procedures (External Link)

AAPM TG-106 provides guidance both on accelerator commissioning as well as best practices and minimal required data for TPS commissioning.

AAPM TG-119: IMRT Commissioning (External Link)

AAPM TG-119 provides confidence limits and baseline performance expectations for IMRT based on a multi-institutional comparison of IMRT planning and dosimetry.

AAPM TG-218: Tolerance Limits and Methodologies for IMRT Measurement-Based Quality Assurance (External Link)

AAPM TG-218 analyzes the use of various measurement schemes (DTA and gamma-index) to assess agreement between TPS computed dose and delivered dose.

Key Point: Acceptance Testing vs Commissioning

Acceptance testing is performed with the vendor and physicists to assure that hardware and software meet certain predefined expectations.

Commissioning takes place after acceptance of the machine and involves significantly more detailed characterization of the performance of a specific machine and treatment planning system.

Basic Aspects of TPS Commissioning (AAPM TG-53)

AAPM TG-53 provides high level guidance on what aspects of a TPS must be commissioned. Generally, this guidance is broken down into dosimetric and non-dosimetric testing procedures.

Non-dosimetric TPS Commissioning

Accurate image interpretation

  • Spatially accurate with respect to distance and orientation
  • Accurately converts CT number to physical or electron density
  • Accurate representation of machine parameters

Beam energy spectrum and profiles are correctly modeled

  • Beam modifiers such as collimation, MLC, wedges, and applicators are correctly modeled
  • TPS is able to accurately compute the impact of immobilization devices and bolus
  • Accurate presentation of anatomical and dosimetric calculations

Accurately localizing and computing contour areas

  • Producing accurate anatomical dose volume histograms (DVH)

Dosimetric TPS Commissioning

TPS ability to recreate input data

  • When the TPS is used to calculate dose for the same geometries used in commissioning, it accurately reproduces the commissioning data

Algorithm verification

  • Algorithm verification refers to the process of determining that the algorithm operates as intended. This does not need to be done at every clinic because it is a test of the algorithm's function rather than specific implementation.

Calculation verification

  • Calculation verification means testing that the TPS accurately computes the dose distribution of a given plan delivered by a given machine for a variety of circumstances including
    • Irregular field shapes
    • Heterogeneous materials (i.e. lung, bone)
    • Differing SSDs
    • Wedges
    • MLC shaped fields
    • Dynamically shaped fields (i.e. Sliding window IMRT, VMAT)

Evaluation of limiting cases

  • Determining what limiting cases should be tested requires both understanding the TPS weaknesses and outlining the use scenarios for a given application.
  • Limiting cases include
    • Large and small SSD
    • Large and small fields
    • Situations where the algorithm is likely to be weak

End-to-end testing

  • End-to-end testing takes a phantom from simulation through dose calculation and measures the final delivered dose to assure integrity of the entire treatment process.
  • End-to-end testing often makes use of an anthropomorphic phantom which simulates human geometry.

AAPM Medical Physics Practice Guideline 5 Methodology

AAPM MPPG5.a presents TPS commissioning and quality assurance as a three phase process.

TPS Commissioning And QA Phases

  1. Data Acquisition and Processing
    • Basic beam data is measured and processed; this phase is the primary focus of AAPM TG-106: Accelerator Beam Data Commissioning Equipment and Procedures (External Link)
  2. TPS Validation and Beam Model Iteration
    • During the second phase of commissioning, the physical measurements are applied to the TPS software to produce a beam model.
    • This model is then tested against additional physical measurements in a variety of simulated real use cases.
    • Results of this testing are used to iteratively refine the model.
  3. Ongoing Quality Assurance
    • Finally ongoing quality assurance is assures that the software's beam model remains valid into the future.
    • Checksum files are used to assure no changes to software configuration has occurred.
    • Physical measurements tested against baseline measurements are also used.
Workflow of TPS dose algorithm commissioning, validation, and routing QA. Adapted from AAPM MPPG 5.a figure 1.

Key Point: MPPG 5 is not applicable to the following:

  • Systems that do not use an MLC for beam shaping
  • Commissioning of small field TPS systems (field sizes <2x2cm2)
  • Use of non-commercial TPS systems
  • Sub-MV or brachytherapy TPS systems
  • Non-dosimetric components of TPS systems (e.g. biological response models, dose-volume histograms, image registration or contouring, etc)

Phase 1: Data Acquisition and Processing

CT Calibration Data

CT calibration in the TPS consists of specifying what CT number (Hounsfield Unit) corresponds to what electron or physical density.

Equipment

CT number to density phantom consisting of materials with known physical and electron density.

Density range: Air (~0.001 g/cm3) to dense bone (1.4-1.9 g/cm3). High density materials such as titanium and gold may also be required depending upon clinical use.

Process

  1. CT density phantom is scanned on CT using standard kVp setting.
  2. Measure mean CT number within each density plug.
  3. Create table correlating CT number to electron/physical density within the TPS.
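A minimal sketch of how such a CT number to density table might be applied, using linear interpolation between calibration points. The table values below are placeholders, not measured or vendor data.

```python
import numpy as np

# Illustrative CT-number-to-density calibration points measured from a density phantom.
hu_points      = np.array([-1000, -700, 0, 300, 1200])      # mean HU per density plug
density_points = np.array([0.001, 0.30, 1.00, 1.20, 1.85])  # physical density, g/cm3

def hu_to_density(hu):
    """Linear interpolation between calibration points, clamped at the table ends."""
    return np.interp(hu, hu_points, density_points)

print(hu_to_density(-650))   # lung-like voxel
print(hu_to_density(150))    # soft tissue / light bone voxel
```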

Additional Readings

AAPM TG-66: Quality assurance for computed-tomography simulators (External Link)

Treatment Beam Data

Although MPPG5.a notes the need for proper data acquisition and processing, it largely refers the reader to AAPM TG-106 for information on basic beam data collection.

Note: Although inhomogeneity characterization is important to the TPS commissioning process, it is considered beyond the scope of AAPM TG-106.

Data Required for Beam Model Creation

Minimum Recommended Photon Beam Data

  • Percent Depth Dose (PDD)
  • Field profiles (in-plane and/or cross-plane) at various depths
  • MLC data
    • Interleaf leakage
    • Intraleaf leakage
    • Tongue and Groove
  • Head leakage
  • Total scatter
  • Tray factors (if used)
  • Wedge factors

Minimum Recommended Electron Beam Data

  • Percent Depth Dose (PDD)
  • Field profiles (in-plane and/or cross-plane) at various depths
  • Cone factors
  • Insert factors
  • Virtual source position
    • Found using profile scans in air
Photon percent depth dose (PDD) distribution measured and calculated in TPS.
Photon profiles taken at multiple depths.
Electron percent depth dose (PDD)

IMRT/VMAT Data

Most commercial treatment planning systems extrapolate basic beam measurements to produce their small field dose distributions. Therefore, small field, IMRT, and VMAT measurements are made for the purpose of beam model validation rather than model creation.

MPPG5.a recommended measurements

  • Percent depth dose profiles for field sizes down to 2x2cm2
  • Small field output factors (down to at least 2x2cm2)
  • MLC intraleaf and interleaf transmission, leaf gap
    • Use small ion chamber under leaf
    • Use film for interleaf leakage
  • Leaf tip penumbra profile
    • Use small detector (diode or micro-chamber)
  • Leaf timing for binary MLC systems
    • E.g. Tomotherapy

Evaluation Comparison

All data should be compared to a reference data set from as similar a treatment machine as possible. The intent of this check is to identify potential changes in inverse square behavior, beam divergence, beam energy, and other beam characteristics. Such changes may indicate an error in measurement and should be closely reviewed prior to entering the data into the TPS. Data to be compared includes:

  • Crossbeam profiles at multiple depths and field sizes
  • Percent depth dose distributions for varying field sizes
  • MLC transmission factors

Additional Readings

AAPM TG-106: Accelerator Beam Data Commissioning Equipment and Procedures (External Link)

Phase 2: Beam Model Creation

For the purpose of validation and fine tuning, MPPG5.a considers each "configured beam" to be distinguished by a unique energy and treatment head configuration. For example: 6MV, 6MV small field/SRS, 6MV flattening filter free, and 6MV with physical wedge are all considered unique beams requiring individual validation.

AAPM MPPG 5 breaks down validation into 4 categories, listed below, and advocates for an iterative process of validation and model tuning.

Validation Categories

  • Basic Photon Dose Calculation
  • Photon Dose Calculation in Heterogeneous Media
  • Photon Dose Calculation in IMRT/VMAT Setting
  • Electron Dose Calculation

Beam Model Parameters

Photon Parameters

The below parameters are commonly used in Kernel based dose computation algorithms.

Photon Energy Spectrum

The energy spectrum is commonly modeled as a set of discrete energies. This is because each energy commonly has a unique, Monte Carlo generated, dose deposition kernel.

Photon energy spectrum fit is dominated by PDD fitting, often with a focus on the reference field size (i.e. 10x10cm2).

Photon and electron energy spectrum for a 6MV photon beam model.
Optimal source size matches penumbra in small field measurements.

Electron Energy Spectrum

Electron contamination of the photon beam must be modeled for accurate dose calculation in the superficial region. Because there is no Kernel for electron dose distribution, a continuous energy distribution may be used.

Electron fitting focuses on the superficial region of the PDD curves.

Primary Source Geometry

The location and shape of the primary photon source is important for accurate ray tracing.

Source location: Taken to be the physical location of the target in the treatment head.

Source profile: May represent a physical source size or may be optimized to fit small field measurements. The profile is often represented as a circular or elliptical Gaussian.

Typically 0.5-3mm diameter.

Off-axis Softening

Because of differential attenuation within the flattening filter, flattened clinical beams soften (become lower in mean energy) with increasing radial distance from the beam axis. This causes the lateral portions of a clinical beam to attenuate more rapidly than the central portion. This effect must be modeled in software.

Off-axis softening optimization focuses on large field profiles at depth near dmax in the 90-100% of maximum dose range.

Collimation (Jaw and MLC) Parameters

The treatment planning system must accurately model position and attenuation of the collimation devices used in planning.

Positioning: Positioning is typically optimized by matching the full-width-at-half-max value at the set SSD (typically 100cm).

Attenuation: Attenuation is found by averaging large out-of-field (blocked) measurements.

Output Factor Corrections

Output factor corrections may be used for variable field sizes and beam modifiers such as dynamic or physical wedges.

Why output factor corrections with Ray-Tracing and modeled attenuation?

Ray-tracing follows the beam path from a voxel back to the source and includes the impact of attenuation. These effects are NOT what TPS output factor corrections account for. Rather, these corrections account for differences in scatter back to the monitor chamber. Changes in the amount of this scatter influence when the monitor chamber shuts off the beam, and this effect is modeled using output factor corrections.

For example, at smaller field sizes the jaws scatter more electrons back into the monitor chamber. This causes the monitor chamber to read more dose than is actually delivered. In effect, the monitor chamber shuts off the beam "early" for smaller fields.

Output factor corrections generally increase from small field sizes to large field sizes, compensating for changes in scatter back to the monitor chamber.

Key Point: Beam model parameters vary from vendor to vendor and, hence, are not extensively covered in MPPG 5. OMP presents only a set of common parameters and their impact on dose calculation.

Basic Photon Dose Calculation Tests

All basic photon dose calculation tests are computed and measured in unit density phantoms with simple beam arrangements.

Barely meeting tolerances in this section may result in tolerance failures during heterogeneity or IMRT/VMAT testing. It is recommended, therefore, that some model adjustment be made to optimize the results of this section even if the model is already within tolerance.

Key Point: There are two categories of basic photon dose tests.

  1. Tests comparing the TPS model to data used directly in commissioning.
  2. Tests comparing the TPS model to plan measurements that vary by a small number of variables from commissioning data plans.

Comparisons between TPS model and data used in commissioning

5.1 Dose Distribution in planning module vs modeling (physics) module

Purpose: To assure that the beam model is identical between TPS modules

Test: Compare planning module dose calculation to physics module dose calculation for the same large (>30x30cm2) field in simulated water.

Tolerance: Identical

5.2 Dose in test plan vs clinical condition

Purpose: Confirm that the TPS can accurately reproduce simple reference beam measurements.

Test: The TPS should calculate dose to the reference point using beam calibration conditions (i.e. TG-51 conditions). The TPS calculation should be compared to the measured data used in TPS commissioning.

Tolerance: 0.5%

5.3 Dose distribution calculated in planning system vs commissioning data

Purpose: Confirms that planning module dose calculated in water phantom matches commissioning data.

Test: Calculate dose for large and small field commissioning data in planning module. Compare calculated PDD and off-axis ratios to commissioning data.

Tolerance: 2%

Comparisons between TPS model and homogeneous non-commissioning data

Tests in this section continue to compare TPS calculation to measurement for water phantoms but now the physical measurements are not directly used in commissioning the beam model. These plans change only a small number of parameters from commissioning plans.

Adapted from AAPM MPPG5.a Table 4. Sample tests are from IAEA TRS-430 (External Link).
  • 5.4 Small MLC-shaped field (non-SRS): Photon Test 1
  • 5.5 Large MLC-shaped field with extensive blocking (e.g. mantle): Photon Test 3
  • 5.6 Off-axis MLC-shaped field with maximum allowed leaf over-travel: Photon Test 2
  • 5.7 Asymmetric field at minimal anticipated SSD: Photon Test 6
  • 5.8 10x10cm2 field at oblique incidence (≥20°): Photon Test 10
  • 5.9 Large (>15cm) field for each nonphysical wedge angle: None

Remembering MPPG5.a Basic Dose Calculation Tolerances

  • Comparing calculations between different TPS modules (Test 5.1): Identical
  • Reference dose per MU (Test 5.2): 0.5%
  • High dose region, ≤1 parameter changed between test conditions and reference data conditions: 2%
  • High dose region, >1 parameter changed between test conditions and reference data conditions: 5%
  • Penumbra region: 3mm distance to agreement
  • Low-dose tail: 3% of maximum field dose

Photon Dose Calculation in Heterogeneous Media

Validation of photon dose calculation in heterogeneous media has two parts:

  1. Direct validation that the system correctly maps physical or electron density in a planning CT.
  2. Validating that dose is correctly calculated in the following areas
    1. Within low density tissue
    2. Near the interface of heterogeneous tissues (low and high density)
    3. Beyond the heterogeneous tissues (low and high density)

Recommendations for dose validation in heterogeneous media

  • A 5x5cm2 field size is recommended because small fields enhance the dosimetric impact of low-density materials.
  • Measurements should not be made directly at the interfaces of inhomogeneities because these are build-up/build-down regions.
  • The inhomogeneity should cause at least a 10% dose correction compared to the homogeneous phantom.
  • Edits to the beam model as a result of failures in this section require repeated validation of the basic photon calculation tests.
Phantom setup for MPPG5 heterogeneity test 6.2.
Adapted from AAPM MPPG5.a Table 6
  • 6.1 Validate planning system reported physical or electron density: CT-density calibration for air, lung, water, dense bone, and other tissue types. Tolerance: -
  • 6.2 Heterogeneity correction distal to lung tissue: Calculate and measure dose above and below a lung density inhomogeneity. Tolerance: 3%

Photon Dose Calculation in IMRT/VMAT Setting

IMRT/VMAT validation tests recommended in AAPM MPPG5.a draw heavily from AAPM TG-119.

Recommendations for dose validation in IMRT/VMAT setting

  • Field sizes down to at least 2x2cm2 must be investigated even if such data is not required by the TPS.
  • AAPM TG-119 found that agreement to within 3% of prescription dose is generally appropriate.
  • 2%/2mm gamma index evaluation is recommended over the more common 3%/3mm because it is able to more readily identify areas of concern.
  • Investigation should focus on highly modulated IMRT/VMAT plans as these will be most impacted by subtle errors in MLC model parameters.
  • There is significant disagreement regarding the required testing for an IMRT/VMAT program. MPPG 5 recommends only the minimum tests, and more testing may be appropriate in a given clinical setting.

IMRT/VMAT TPS Validation Tests

Adapted from AAPM MPPG5.a Table 7.
  • 7.1 Verify small field PDD: Plan and measure PDD for fields at least as small as 2x2cm2. Recommended detectors: diode detector, plastic scintillator.
  • 7.2 Verify small MLC-defined field output factors: Use small MLC-defined fields. Recommended detectors: diode detector, plastic scintillator, mini/micro ion chamber.
  • 7.3 TG-119 tests: Head-and-neck and C-shaped test plans recommended. Recommended detectors: ion chamber, film, detector array.
  • 7.4 Clinical tests: At least 2 clinically relevant cases should be planned, measured, and analyzed in depth. Recommended detectors: ion chamber, film, detector array.
  • 7.5 External review: Simulate, plan, and treat an anthropomorphic phantom with embedded dosimeters. Most commonly, this is performed using the IROC Houston head-and-neck and thoracic phantoms. If IROC testing is not possible, the minimal acceptable validation is to have another Qualified Medical Physicist perform an independent evaluation. Recommended detector: anthropomorphic phantom with embedded dosimeters.

IMRT/VMAT TPS Validation Test Tolerances

Adapted from AAPM MPPG5.a Table 8.
  • Ion chamber, low-gradient (uniform dose) region: 2% of prescribed dose (targets); 3% of prescribed dose (regions/organs-at-risk)
  • Planar/volumetric detector arrays, all regions: No tolerance. Note: The 2%/2mm gamma index should be investigated, but no pass rate tolerance is recommended.
  • End-to-end testing, low-gradient (uniform dose) region: 5% of prescribed dose
IROC IMRT head-and-neck phantom.

Electron Dose Calculation

Electron validation is relatively simple compared to a full photon validation. Important aspects of electron dosimetry and treatment planning are noted by AAPM TG-25 and AAPM TG-70.

AAPM TG-70 states that electron beam treatment planning should be:

  • CT based
  • Employ 3D heterogeneity correction
  • Use a pencil beam algorithm or better

Recommendations for electron dose validation

  • Create plots of PDD and output factors for all standard cutout sizes at each energy.
    • This is used to confirm correct qualitative behavior of energy and field size.
  • Clinical use of non-routine electron fields (e.g. abutting fields, small fields, etc) will require additional verification to understand the limits of the electron model.

Electron TPS Validation Tests

Adapted from AAPM MPPG5.a Table 9.
  • 8.1 Basic model verification with shaped fields: Measure PDD and output factors for cutouts at standard and extended SSDs. Tolerance: 3%/3mm
  • 8.2 Surface irregularities and obliquity: Oblique incidence using reference cone and nominal SSD. Tolerance: 5%
  • 8.3 Inhomogeneity test: Use reference cone and SSD in a heterogeneous phantom. Tolerance: 7%

Phase 3: Ongoing Quality Assurance

Once TPS commissioning and validation is complete, the final step is to create an ongoing quality assurance program which achieves the following goals:

  1. Verifies that the TPS remains in the commissioned configuration.
    1. This may be accomplished using a "checksum" file.
  2. Verifies that the TPS continues to correctly calculate dose following any TPS upgrades.
    1. This is a comparison of doses calculated at commissioning to doses calculated after the TPS upgrade.
    2. Calculations should be compared at the 1%/1mm level.
    3. Selected plans should be compared for all commissioned beams.

What is a checksum file?
A checksum is a small, unique code derived from a larger data set. A checksum created from the original commissioned data set can be compared to a newly computed checksum during routine quality assurance. If the two checksums are the same, it is safe to conclude that the commissioned data has not been edited.
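A minimal sketch of how such a comparison could be scripted; the file name is hypothetical, and commercial systems typically compute and compare their checksums internally.

```python
import hashlib
from pathlib import Path

def checksum(path):
    """Return the SHA-256 hex digest of a configuration/data file."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

# At commissioning, record the digest; during routine QA, recompute and compare.
# baseline = checksum("beam_model_6MV.cfg")   # hypothetical file name
# assert checksum("beam_model_6MV.cfg") == baseline, "Commissioned data has changed!"
```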

Knowledge Test

1. At what point is  a treatment plan considered to be Pareto-optimal?


2. What is a phase space file?


3. What measurement is needed for electron beam TPS commissioning that is not needed for photon beam commissioning?



 
