This post is annoyingly long. It's a primer on neutron detection and its application to nuclear reactor safety.
It's critically important to know precisely what power level your reactor is at. The reactor is perfectly capable of producing more than enough heat to damage itself, and you don't want it to do that. It's also important to know how rapidly power level is increasing during a start-up, so that you don't have a power spike that blows right through a thermal limit.
Reactors are designed and modeled with the thermal limits of the fuel elements in mind. These limits are based on the ability of the reactor coolant to remove heat from the fuel, the temperature at which damage to the fuel cladding will occur, and system conditions under many potential accident scenarios. The goal of this modeling is to prevent fuel damage in the event of any conceivable accident.
One of the key assumptions of modeling for accidents is that the reactor is being operated within design limits. One of those limits is reactor power, because that is what generates the heat.
To know what reactor power is doing, we have a couple of available methods.
One method is calculating the heat output, based on standard thermodynamic laws. You assume that every bit of the energy produced in the reactor core is being carried away as heat in the steam. If this were not the case, reactor temperature would have to increase or decrease. Therefore if the reactor temperature is neither increasing nor decreasing, you are in equilibrium, and heat added by the reactor is equal to the heat removed in the steam.
With that assumption in mind, you measure steam temperature, pressure, and flow under perfectly steady-state conditions. Then you perform the heat transfer calculation - you measure the enthalpy increase caused by the reactor. It might not seem apparent, but the feedwater that is returned to the steam generator to be boiled also carries a certain amount of thermal energy. This energy was not added by the reactor, and thus must be subtracted from the energy measured in the steam.
In equation form then: (where Q = heat)
Q Reactor = Q steam - Q feedwater
Does that make sense? Feedwater comes in kind of warm. It has energy - not much, but we still need to measure and subtract that heat. The reactor adds an unknown amount of heat, which boils the feedwater in a steam generator. Next, we measure the temperature, pressure, and flow of the steam. The energy difference between the feedwater and steam tells us how much heat the reactor added. By using this simple heat equation, we can then determine reactor power.
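If you want to see the arithmetic, here's a minimal sketch of that heat balance in Python. The flow rate and enthalpy values are made-up placeholder numbers, not data from any particular plant - a real calorimetric calculation pulls enthalpies from steam tables using the measured temperatures and pressures.

```python
# A minimal sketch of the steady-state heat balance: Q_reactor = Q_steam - Q_feedwater.
# All numbers below are illustrative placeholders, not plant data.

def reactor_thermal_power_mw(mass_flow_kg_s: float,
                             h_steam_kj_per_kg: float,
                             h_feedwater_kj_per_kg: float) -> float:
    """Reactor thermal power in MW, assuming perfect steady state
    (steam flow out equals feedwater flow in, and nothing is stored)."""
    q_steam_kw = mass_flow_kg_s * h_steam_kj_per_kg          # kJ/s == kW
    q_feedwater_kw = mass_flow_kg_s * h_feedwater_kj_per_kg
    return (q_steam_kw - q_feedwater_kw) / 1000.0            # kW -> MW

# Hypothetical measurements: 500 kg/s of steam at ~2780 kJ/kg,
# with feedwater returning at ~950 kJ/kg.
print(reactor_thermal_power_mw(500.0, 2780.0, 950.0))        # ~915 MW thermal
```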
Below: We are measuring the increase in energy content and flow of the fluid between points 2 and 3. That is all.
There are some caveats to using this method. For one thing, the above calculation will only be accurate if conditions are perfectly steady-state. If steam flow or feedwater flow is irregular, or if the feedwater temperature is changing, or if reactor power is drifting slightly during the measurement, then errors will be introduced that will cause your calculated reactor power to be incorrect. For a real-time reactor power measurement, this calculation is not ideal.
Another caveat is that the reactor has to be running at 50-100% of its rated power to get valid data. That's fine for determining how hard you can run the reactor, but this method won't work when the reactor is starting up and not yet generating heat.
Reactors only generate appreciable heat between roughly 1% and 100% power, and you have to be sure you don't exceed 100% power. You also need to know what the power level is during start-up, even if the power level is only a millionth of 1% and climbing. So how on earth do we measure that tiny fraction of a percent?
With that out of the way, we can finally get to the other method for determining reactor power: Neutron Detectors.
Free neutrons near the reactor core are an excellent proxy for reactor power. Without them, there is no nuclear reaction, and therefore zero reactor power. No neutrons, no fission. Better yet, neutron population is proportional to the number of fissions occurring in the core. For this reason, neutron count is the preferred means of measuring reactor core power. Bear in mind that a reactor can be critical (self-sustaining fission chain reaction) without producing any noticeable heat.
Nuclear reactors have extremely wide-ranging power levels, ranging from 1x10^-11 to 100%. No single detector is capable of giving an accurate output over such a wide scale. For this reason we use different types of neutron detectors that are most accurate in their own power band.
The ranges these detectors operate at - which have some overlap - are called:
- Source Range - measured in counts/second (CPS) roughly 1 CPS to 1,000,000 CPS
- Intermediate Range - measured in amps (we will see why shortly) 10^-11 to 10^-3 Amps. (In decimal form: 0.00000000001 amps to 0.001 amps)
- Power Range - 1% - 100%
As a precursor to discussing how we detect neutrons, we have to understand what happens when a low-pressure gas becomes ionized in an electric field. Ionizing a gas is the most common technique for detecting and quantifying incoming radiation, including neutrons.
Below is a simple drawing of how a gas-filled detector works. DC voltage is applied to a small cylinder that is filled with gas at reduced pressure. Positive voltage is applied to a center filament inside the chamber, and negative voltage is applied to the outside of the chamber. When incoming radiation ionizes the gas, the applied voltage draws the ion pairs away from one another and creates an electrical current that can be measured. The gas has to be at reduced pressure, or it will be so dense that the ion pairs will not drift away from each other.
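To get a feel for just how tiny that raw ionization current is without any gas amplification, here's a back-of-the-envelope sketch. The event rate and the ion pairs per event are assumed round numbers, purely for illustration.

```python
# Back-of-the-envelope: current from collected ion pairs with no gas amplification.
# The event rate and ion pairs per event are illustrative assumptions.

ELECTRON_CHARGE_C = 1.602e-19      # coulombs per elementary charge

events_per_second = 1.0e6          # assumed rate of ionizing events in the chamber
ion_pairs_per_event = 1.0e4        # assumed ion pairs produced per event

current_amps = events_per_second * ion_pairs_per_event * ELECTRON_CHARGE_C
print(f"{current_amps:.2e} A")     # ~1.6e-9 A: nanoamps, hence the need for very
                                   # sensitive electronics or gas amplification
```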
Radiation measurement gets a little more interesting than described by the diagram above. Now we will discuss gas amplification, which is fascinating!
If you look at the graph below, you will see that the bottom scale is "applied voltage" - how much voltage we are applying between our central electrode and the outside of our chamber. The left scale is the number of ion pairs collected from a single ionizing event. That "event" would be external radiation knocking an electron off an atom of gas in our detector. This graph explains the basis of gas amplification: applying greater voltage to our detector chamber changes how many ion pairs we collect per event.
The chart is divided into six regions. The amount of DC voltage we apply to the electrodes in our detector determines what happens to the ion pairs that are created by the external radiation.
- Region 1: Recombination region - The voltage applied is so low that the electron makes its way back to the atom it was knocked loose from.
- Region 2: Ionization region - The voltage is now high enough that the ionized atom and electron are pulled away from each other. The positively charged atom migrates to the cathode and the electron migrates to the anode. This detector will create a very tiny output current that reflects how much radiation is ionizing the gas. Amplification of this tiny signal by external circuits is necessary.
- Region 3: Proportional region - In this region the voltage is high enough that the ion pairs moving towards the electrodes have enough energy to knock electrons off other atoms in the gas, called secondary ionization. These in turn can create more ion pairs. This process is called "gas amplification". With this applied voltage, you get a lot of output current that is proportional to the amount of radiation hitting the detector. It is important to maintain very tight control of the voltage on this type of detector, because the applied voltage has a huge influence on the amount of gas amplification.
- Region 4: Limited proportional region - In this region the consistent ratio of applied voltage to ion pair production begins to break down, and so it is not useful for making consistent measurements.
- Region 5: Geiger-Mueller region - In this region the applied voltage is so high that a single ionizing event causes nearly every atom of gas in the detector to rapidly ionize, creating a large cascade and a brief pulse of current output. However, the applied voltage is still low enough that following the cascade, the atoms can recombine with electrons, resetting the detector for another event. This would be your classic clicking radiation detector.
- Region 6: Continuous discharge region: In this region, the voltage applied is so high that the gas is continuously ionized regardless of ionizing events. This would be neon sign territory, and not useful for detecting radiation.
Neutrons are slightly more difficult to detect than alpha, beta and gamma radiation, and that's because they don't directly ionize the matter they pass through. They tend to wander right through materials until they are absorbed by a nucleus, which may or may not make the target atom radioactive. Neutrons don't carry an electric charge like alpha and beta particles, so they won't typically ionize gas inside a Geiger counter, or create a flash of light in the crystal of a scintillation detector.
To build a neutron detector, we need something that readily reacts with a neutron and ejects an energetic particle. That something is Boron-10.
Below, the nuclear reaction equation for a Boron 10 nucleus and a thermal neutron.
Neutron + Boron-10 ---> Lithium-7 + alpha particle (both ionized) + 2.31 MeV
Unfortunately elemental Boron is not a gas, it's a solid. However, we can make a gaseous compound of Boron by reacting it with Fluorine. The Boron Trifluoride (or BF3) detector works just like any other gas-filled radiation detector, except that it is also sensitive to neutrons. This type of detector is very sensitive and is useful at very low neutron counts when the reactor is shut down or just starting up. For this reason, we use BF3 proportional counters - operating in Region 3, the proportional region - for measuring reactor power in the Source Range.
Now I already know what you are thinking here: Gamma and beta radiation will *also* ionize this BF3 gas, and you won't know whether the output pulse of the detector is due to a neutron, or whether it is due to gamma radiation. As we know, fission product build-up in the fuel generates quite a bit of gamma and beta radiation. And you would be correct that these will also ionize the gas in our detector - but there's a solution for that.
When gamma or beta radiation interacts with matter, it frees an electron and creates an electron-ion pair. Those don't have a lot of energy - a few electron volts. On the other hand, when a neutron reacts with a Boron-10 nucleus, the nucleus fragments into two heavy charged particles - a Lithium-7 nucleus and an alpha particle - which zoom off with a combined energy of 2.31 million electron volts. This creates a much larger ionization cascade - and output pulse - than the handful of ion pairs created by beta or gamma interactions. It's easy to simply filter out all the low-energy events associated with gamma and beta radiation, while counting the high-energy pulses associated with a neutron interaction.
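Here's a minimal sketch of what that pulse-height discrimination amounts to. The threshold and the pulse amplitudes are arbitrary illustrative values - a real instrument sets its discriminator from a measured pulse-height spectrum.

```python
# A minimal sketch of pulse-height discrimination in a BF3 counter.
# The threshold and pulse amplitudes are arbitrary illustrative values.

GAMMA_DISCRIMINATOR = 50.0   # assumed threshold (arbitrary units)

def count_neutrons(pulse_heights):
    """Count only the large pulses from B-10(n, alpha)Li-7 events,
    ignoring the small pulses from beta/gamma ionization."""
    return sum(1 for height in pulse_heights if height > GAMMA_DISCRIMINATOR)

# Hypothetical stream of detector pulses: mostly small gamma/beta pulses,
# with a few big neutron-induced pulses mixed in.
pulses = [2.1, 3.4, 1.7, 310.0, 2.9, 295.0, 4.2, 305.5]
print(count_neutrons(pulses))   # -> 3
```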
As reactor power and neutron population increase, the source range BF3 detector becomes saturated. So many pulse events occur so rapidly that the tube becomes continuously ionized. At this point, voltage to the detector is shut off to prevent damage. By this time, the Intermediate Range detectors have already begun to register, and we continue the start-up by watching these. Intermediate Range detectors operate in Region 2, the ionization region. These detectors will take us up to the Power Range.
The Intermediate Range detector is typically a Fission Chamber or a Compensated Ion Chamber. Let's discuss the Fission Chamber first, because it is a simpler design.
Suppose we were to coat the inside of our detector with a thin layer of U-235, fill it with gas, and place our high voltage on a wire at the center of the can. That's a fission chamber.
Every time a neutron causes a fission in our can, two VERY high-energy fission fragments will blast into the gas, ionizing everything in their path. The applied voltage sweeps the newly created ion pairs to the electrodes, generating an output current that is proportional to the number of neutrons causing fissions inside the detector. Because the signal generated by the fission fragments is so large (approximately 200 MeV per fission), it dwarfs any signal caused by beta and gamma radiation. Nevertheless, this detector does not compensate for them, so it is slightly less accurate than the other detector we will discuss shortly. The fission chamber is very versatile, and has a wide operating range. It can also be used in the power range.
The other Intermediate Range detector has a more limited measurement range than the fission chamber, but it is more accurate, because it subtracts the gamma and beta radiation signal from the neutron signal. This is significant for reactor safety, especially at lower neutron levels, because gamma and beta noise can mask the neutron signal that indicates the *actual* reactor power level.
This Intermediate Range neutron detector is called a Compensated Ion Chamber. In this detector, the inside of a chamber is coated with boron. The detector is operated in the ionization region. In the Intermediate Range, neutrons are plentiful enough that gas amplification is not necessary to detect the signal. However this design compensates for gamma and beta radiation, and how this is accomplished is quite clever.
A second chamber *without* boron is placed inside the other chamber, but with reversed electrical polarity. The two chambers are then connected electrically. The boron-coated chamber's electrical output is the combined interactions of neutrons, beta and gamma. The non-coated chamber's output is due only to beta and gamma, and this is electrically subtracted from the coated chamber. See diagram below to get a better idea of how this arrangement works.
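A toy model of that subtraction, with made-up currents just to show the idea:

```python
# A toy model of compensated ion chamber operation. The currents are invented
# illustrative values; the point is only the electrical subtraction.

def compensated_signal(coated_chamber_amps: float,
                       uncoated_chamber_amps: float) -> float:
    """The boron-coated chamber sees neutrons + gamma + beta; the bare chamber
    sees only gamma + beta. Opposite-polarity wiring means the two currents
    subtract, leaving (approximately) the neutron contribution."""
    return coated_chamber_amps - uncoated_chamber_amps

neutron_plus_gamma = 4.0e-6   # hypothetical current from the boron-coated chamber
gamma_only = 1.5e-6           # hypothetical current from the uncoated chamber
print(compensated_signal(neutron_plus_gamma, gamma_only))   # ~2.5e-6 A of neutron signal
```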
Once we reach the Power Range, we shut off voltage for the Intermediate Range detectors and switch to a Boron-lined Uncompensated Ion Chamber. At this power level it is no longer necessary to use a compensating detector. Once we reach 1% reactor power, neutron signal dwarfs the gamma and beta signal. Also at operating power levels, gamma and beta radiation are proportional to neutron levels, so compensation is unnecessary for that reason as well.
So how do we use this information that neutron detectors provide? Monitoring and Reactor Safety.
Several independent channels of neutron detectors are used to determine when the reactor is "critical", or has a self-sustaining reaction. Before starting a reactor, you do a lot of calculations to determine exactly how far the control rods will have to be withdrawn to achieve criticality. Each small pull of the control rods causes neutron levels to increase, but if the reactor is still subcritical, the neutron level stops rising and settles out at a new, higher value. When the neutron level keeps rising steadily with no further rod motion, the reactor is self-sustaining, or "critical". This usually happens low in the intermediate range - well below the point where the reactor generates any heat.
Suppose that you goofed on your calculation though, and pulled the control rods out too far. Power level could increase exponentially, and before you could move those control rods in, reactor power would blast right through the 100% level and explode. This has been done experimentally. See the video below for the results of an intentional reactivity excursion.
This is where reactor protection comes in. A high Start-up Rate (power increasing too rapidly) will activate the reactor protection circuits, and release the control rods back into the core. This is usually arranged so that a single false reading does not cause a trip. The circuits are typically set up to trip if 2 out of 3 or 2 out of 4 channels read high start-up rate.
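Here's a rough sketch of what that 2-out-of-N coincidence logic boils down to. The setpoint and the channel readings are invented for illustration.

```python
# A minimal sketch of 2-out-of-N coincidence trip logic for high start-up rate.
# The setpoint and channel readings are invented for illustration.

def high_sur_trip(channel_surs, setpoint_dpm=1.0, needed=2):
    """Trip only if at least `needed` independent channels read a start-up
    rate above the setpoint, so one failed channel can't trip the reactor
    by itself."""
    channels_high = sum(1 for sur in channel_surs if sur > setpoint_dpm)
    return channels_high >= needed

print(high_sur_trip([0.2, 1.8, 0.3]))   # False: only one channel reads high
print(high_sur_trip([1.4, 1.8, 0.3]))   # True: 2 out of 3 exceed the setpoint
```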
Start-up Rate can be measured in a couple of ways. Decades per minute (DPM) is a common measurement: it means that reactor power is increasing one decade (x10) each minute. 1 DPM is a fairly conservative start-up rate, while 10 DPM is ridiculously fast. DPM is a nice, easy-to-understand scale.
The other way to measure Start-up Rate is called Reactor Period. This is the time it takes for the neutron population to increase by a factor of e (about 2.718). With this measurement, a shorter reactor period indicates a faster start-up rate. A 100-second period would be a nice leisurely start-up rate, but a 20-second period would be extreme and dangerous.
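For the curious, here's the arithmetic tying reactor period to decades per minute, assuming a clean exponential rise. The specific periods plugged in are just examples.

```python
# Relationship between start-up rate (decades per minute) and reactor period,
# assuming a clean exponential: P(t) = P0 * e^(t / period) = P0 * 10^(SUR * t / 60).
import math

def sur_from_period(period_seconds: float) -> float:
    """Start-up rate in decades per minute for a given reactor period."""
    return 60.0 / (period_seconds * math.log(10))    # = ~26.06 / period

def power_after(p0: float, seconds: float, period_seconds: float) -> float:
    """Power after `seconds` at a constant reactor period."""
    return p0 * math.exp(seconds / period_seconds)

print(sur_from_period(26.06))          # ~1.0 DPM (leisurely)
print(sur_from_period(2.6))            # ~10 DPM (far too fast)
print(power_after(1.0, 60.0, 26.06))   # ~10x in one minute at 1 DPM
```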
Once you reach the Power Range, things are a little different. You have now reached the power level where the reactor is creating heat. This heat is beneficial from a reactor safety standpoint, because it creates a negative feedback loop. It heats the moderator, which becomes less dense. The moderator becomes less effective at moderating neutrons, and power level drops. The control rods have to be gradually removed to compensate for the moderator becoming less effective.
Power Range reactor protection is mainly about ensuring that the reactor does not exceed 100% power, and ensuring the reactor trips if a single control rod drops. A dropped rod will cause neutron flux tilting. Power will be very low in the region near the dropped rod, and other regions of the core will increase to compensate, possibly overheating them. If the 3-4 power range detectors do not agree, the reactor will trip.
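One hedged way to picture that channel-comparison logic - the deviation band and the readings below are invented, not actual plant setpoints:

```python
# A toy comparison of power range channels to catch flux tilting, e.g. from a
# dropped rod. The deviation band and readings are illustrative only.

def channels_disagree(channel_percents, max_deviation_pct=5.0):
    """Return True if any power range channel deviates from the channel
    average by more than the allowed band."""
    average = sum(channel_percents) / len(channel_percents)
    return any(abs(reading - average) > max_deviation_pct
               for reading in channel_percents)

print(channels_disagree([98.0, 99.0, 98.5, 97.5]))   # False: channels agree
print(channels_disagree([98.0, 99.0, 80.0, 97.5]))   # True: one quadrant is way low
```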
Power range detectors are calibrated and adjusted from time to time by using the original heat transfer calculation at the very top of this post: Q Reactor = Q steam - Q feedwater.
Very thorough post and I will read it later when not drinking beer. But I missed the part where you talk about the neutron source and AMR2UL yanking it back and forth too hard. ;-)
LMAO!!! Has anyone not got the neutron source stuck in the tube????
I have a really funny story about a neutron source when I worked at a facility with a small swimming pool reactor. This place had an in-core Am-Be neutron source in one of the fuel element slots.
The daily checklist required you to record a bunch of stuff like pool conductivity and water level, etc etc. Then you had to pull the neutron source and place it next to each detector and record the indicated power level on each channel. Then you had to replace the neutron source and take the reactor critical and record the control rod heights. It was just a big rubber stamp that you put in the logbook.
This one dude spaced out and left the neutron source hanging next to one of the detectors, and then began pulling rods for the daily startup check. He got the rods all the way out, and couldn't figure out why one channel was reading 5 watts, but the others weren't doing a thing.
Then there was a bright blue flash as the reactor went prompt critical, after which it scrammed. A lonely neutron had finally made its way over from the neutron source to the fuel and started the reactor up exponentially, because the control rods were all the way out of the core.
The only reason this event didn't make the nightly news is because the reactor was a TRIGA, which is designed to be pulsed. The normal procedure is to have the reactor critical at low power before pulsing, however.