Rain fade refers primarily to the absorption of a microwave radio frequency (RF) signal by atmospheric rain, snow, or ice; these losses are especially prevalent at frequencies above 11 GHz. It also refers to the degradation of a signal caused by the electromagnetic interference of the leading edge of a storm front. Rain fade can be caused by precipitation at the uplink or downlink location. It does not need to be raining at a location for it to be affected by rain fade, as the signal may pass through precipitation many miles away, especially if the satellite dish has a low look angle. Between 5% and 20% of rain fade or satellite signal attenuation may also be caused by rain, snow, or ice on the uplink or downlink antenna reflector, radome, or feed horn. Rain fade is not limited to satellite uplinks or downlinks; it can also affect terrestrial point-to-point microwave links (those on the Earth's surface).
Rain fade is usually estimated experimentally, and it can also be calculated theoretically using the scattering theory of raindrops. The raindrop size distribution (DSD) is an important consideration for studying rain fade characteristics.[1] Various mathematical forms, such as gamma, lognormal, or exponential distributions, are usually used to model the DSD. Mie or Rayleigh scattering theory with a point-matching or T-matrix approach is used to calculate the scattering cross-section and the specific rain attenuation. Since rain is a non-homogeneous process in both time and space, specific attenuation varies with location, time, and rain type.
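For illustration, the scattering calculations described above are usually condensed into the power-law relation γ = kR^α tabulated in ITU-R P.838. The following is a minimal Python sketch of that shortcut together with a gamma-form DSD; the coefficient and parameter values are placeholders for illustration only, not the official table values.

```python
import math

def specific_attenuation(rain_rate_mm_h, k, alpha):
    """Specific rain attenuation (dB/km) via the power law
    gamma = k * R**alpha (the form tabulated in ITU-R P.838).
    k and alpha depend on frequency and polarization; the values
    passed below are illustrative placeholders, not table values."""
    return k * rain_rate_mm_h ** alpha

def gamma_dsd(d_mm, n0=8000.0, mu=0.0, lam=4.1):
    """Gamma-form drop size distribution N(D) = N0 * D**mu * exp(-lam*D).
    With mu = 0 this reduces to an exponential (Marshall-Palmer-like)
    form; the parameter values here are illustrative only."""
    return n0 * d_mm ** mu * math.exp(-lam * d_mm)

print(f"gamma ~ {specific_attenuation(25.0, k=0.03, alpha=1.1):.2f} dB/km")
print(f"N(1 mm) ~ {gamma_dsd(1.0):.0f} drops / (m^3 mm)")
```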
Total rain attenuation also depends upon the spatial structure of the rain field. The horizontal and vertical extent of rain varies with rain type and location. The upper limit of the vertical rain region is usually assumed to coincide with the 0 °C isotherm and is called the rain height. The melting-layer height is also used as the limit of the rain region and can be estimated from the bright-band signature of radar reflectivity.[2] The horizontal rain structure is assumed to have a cellular form, called a rain cell. Rain cell sizes can vary from a few hundred meters to several kilometers, depending upon the rain type and location. Very small rain cells have recently been observed in tropical rain.[3]
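Because the rain region is bounded above by the rain height, the length of the signal path through rain follows from simple geometry. A minimal sketch, assuming the flat-earth approximation L_s = (h_R − h_s)/sin θ, which is reasonable for elevation angles above roughly 5°:

```python
import math

def slant_path_km(rain_height_km, station_height_km, elevation_deg):
    """Slant-path length through the rain region, assuming flat-earth
    geometry (reasonable for elevation angles above about 5 degrees)."""
    theta = math.radians(elevation_deg)
    return (rain_height_km - station_height_km) / math.sin(theta)

# Example numbers (illustrative): 4 km rain height, station at 0.2 km,
# 30 degree look angle.
print(f"{slant_path_km(4.0, 0.2, 30.0):.2f} km through rain")
```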
Rain attenuation on satellite communication links can be predicted using rain attenuation prediction models, which lead to a suitable selection of the Fade Mitigation Technique (FMT).[4] The prediction models require rainfall rate data, which can be obtained either from rainfall prediction maps, which may yield inaccurate attenuation predictions, or from actual measured rainfall data, which gives a more accurate prediction and hence an appropriate selection of FMT. The altitude of the earth station above sea level is also an essential factor affecting rain attenuation.[5] Satellite system designers and channel providers should account for rain impairments when setting up their channels.
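Most prediction models ultimately reduce to multiplying a specific attenuation by an effective path length. A hedged sketch of that final step, using a caller-supplied path-reduction factor rather than the full ITU-R P.618 recipe:

```python
def path_attenuation_db(gamma_db_per_km, slant_path_km, reduction_factor):
    """Total path attenuation A = gamma * L_s * r, where r is a
    path-reduction factor accounting for the non-uniform rain field.
    The reduction factor here is a placeholder; real models such as
    ITU-R P.618 compute it from rain rate and path geometry."""
    return gamma_db_per_km * slant_path_km * reduction_factor

# Illustrative values: 1.0 dB/km over a 7.6 km slant path, r = 0.8.
print(f"{path_attenuation_db(1.0, 7.6, 0.8):.1f} dB")
```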
Possible ways to overcome the effects of rain fade are site diversity, uplink power control, variable-rate encoding, and receiving antennas larger than the size required for normal weather conditions.
The simplest way to compensate for rain fade in satellite communications is to increase the transmission power: this dynamic fade countermeasure is called uplink power control (UPC).[6] Until recently, uplink power control had limited use, since it required more powerful transmitters – ones that could normally run at lower levels and whose power could be increased on command (i.e., automatically). Uplink power control also could not provide very large signal margins without compressing the transmitting amplifier.[7] Modern amplifiers, coupled with advanced uplink power control systems that offer automatic controls to prevent transponder saturation, make uplink power control an effective, affordable, and easy solution to rain fade in satellite signals.[8]
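In principle, UPC amounts to adding back the measured fade, clamped to the amplifier's available headroom so the transponder is not driven into saturation. A minimal illustrative sketch; the function and variable names are assumptions for illustration, not any vendor's API:

```python
def uplink_power_dbw(nominal_dbw, measured_fade_db, max_boost_db):
    """Raise transmit power by the measured rain fade, but never by
    more than the available headroom (max_boost_db), to avoid
    compressing the amplifier and saturating the transponder."""
    boost = min(max(measured_fade_db, 0.0), max_boost_db)
    return nominal_dbw + boost

# Illustrative: 3 dB of measured fade, 6 dB of headroom available.
print(f"{uplink_power_dbw(20.0, 3.0, 6.0):.1f} dBW")
```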
In terrestrial point-to-point microwave systems ranging from 11 GHz to 80 GHz, a parallel backup link can be installed alongside a rain-fade-prone higher-bandwidth connection.[9] In this arrangement, a primary link such as an 80 GHz 1 Gbit/s full-duplex microwave bridge may be calculated to have a 99.9% availability rate over the period of one year.[10] A 99.9% availability rate means that the link may be down for a cumulative total of roughly nine hours per year (0.1% of the time) as the peaks of rain storms pass over the area. A secondary lower-bandwidth link, such as a 5.8 GHz-based 100 Mbit/s bridge, may be installed parallel to the primary link, with routers on both ends controlling automatic failover to the 100 Mbit/s bridge when the primary 1 Gbit/s link is down due to rain fade. Using this arrangement, high-frequency point-to-point links (23 GHz+) may be installed to serve locations many kilometers farther than could be served with a single link requiring 99.99% uptime over the course of one year.[11]
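If the two links fail independently, the combined unavailability is the product of the individual unavailabilities. A sketch of that back-of-the-envelope calculation follows; independence is an assumption (heavy rain can degrade both bands at once), and the backup link's availability figure is illustrative:

```python
HOURS_PER_YEAR = 8760.0

def combined_availability(primary, backup):
    """Availability of two parallel links assuming independent
    failures: the pair is down only when both links are down."""
    return 1.0 - (1.0 - primary) * (1.0 - backup)

# e.g. 99.9% available 80 GHz primary, assumed 99.95% available
# 5.8 GHz backup.
a = combined_availability(0.999, 0.9995)
print(f"combined availability: {a:.7f}")
print(f"expected downtime: {(1 - a) * HOURS_PER_YEAR * 60:.2f} min/year")
```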
It is possible to extrapolate the cumulative attenuation distribution at a given location by using the CCIR interpolation formula:[12]

$$A_p = A_{0.01}\, 0.12\, p^{-(0.546 + 0.043\log_{10} p)}$$

where $A_p$ is the attenuation in dB exceeded for a percentage $p$ of the time and $A_{0.01}$ is the attenuation exceeded for 0.01% of the time.
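A direct transcription of the interpolation formula in Python; the $A_{0.01}$ value passed in is illustrative. Note that setting p = 0.01 recovers the input attenuation (to rounding), which is a useful sanity check:

```python
import math

def attenuation_exceeded_db(a001_db, p_percent):
    """CCIR interpolation: attenuation (dB) exceeded for p% of the
    time, given the attenuation a001_db exceeded for 0.01% of the
    time.  p = 0.01 recovers a001_db to within rounding."""
    return a001_db * 0.12 * p_percent ** -(0.546 + 0.043 * math.log10(p_percent))

print(attenuation_exceeded_db(20.0, 0.01))  # ~20 dB (sanity check)
print(attenuation_exceeded_db(20.0, 0.1))   # smaller: exceeded more often
```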
According to the ITU-R,[13] rain attenuation statistics can be scaled in frequency in the range 7 to 55 GHz by the formula
$$\frac{A_2}{A_1} = \left(\frac{b_2}{b_1}\right)^{1 - 1.12\times10^{-3}\sqrt{b_2/b_1}\,(b_1 A_1)^{0.55}}$$

where

$$b_i = \frac{f_i^2}{1 + 10^{-4}f_i^2}, \qquad i = 1, 2,$$

and $f_i$ is the frequency in GHz.
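A direct transcription of the frequency-scaling formula in Python; the attenuation and frequency values in the example are illustrative:

```python
import math

def scale_attenuation_db(a1_db, f1_ghz, f2_ghz):
    """ITU-R long-term frequency scaling of rain attenuation
    statistics (roughly 7-55 GHz):
        A2 = A1 * (b2/b1) ** (1 - H)
        H  = 1.12e-3 * sqrt(b2/b1) * (b1 * A1) ** 0.55
        b  = f**2 / (1 + 1e-4 * f**2)
    """
    b = lambda f: f * f / (1.0 + 1e-4 * f * f)
    b1, b2 = b(f1_ghz), b(f2_ghz)
    h = 1.12e-3 * math.sqrt(b2 / b1) * (b1 * a1_db) ** 0.55
    return a1_db * (b2 / b1) ** (1.0 - h)

# e.g. scale a 5 dB attenuation statistic measured at 14 GHz to 30 GHz.
print(f"{scale_attenuation_db(5.0, 14.0, 30.0):.1f} dB")
```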