Discrete-Time Signal Processing

W. Kenneth Jenkins , ... Bill J. Hunsinger , in Reference Data for Engineers (Ninth Edition), 2002

Basic Definitions

A continuous-time (CT) signal is a function, s(t), that is defined for all time t contained in some interval on the real line. For historical reasons, CT signals are often called analog signals. If the domain of definition for s(t) is restricted to a set of discrete points t_n = nT, where n is an integer and T is the sampling period, the signal s(t_n) is called a discrete-time (DT) signal. Often, if the sampling interval is well understood within the context of the discussion, the sampling period is normalized by T = 1, and a DT signal is represented simply as a sequence s(n). If the values of the sequence s(n) are to be represented with a finite number of bits (as required in a finite state machine), then s(n) can take on only a discrete set of values. In this case, s(n) is called a digital signal. Much of the theory that is used in DSP is actually the theory of DT signals and DT systems, in that no amplitude quantization is assumed in the mathematics. However, all signals processed in binary machines are truly digital signals. One important question that arises in virtually every application is how many bits are required in the representation of the digital signals to guarantee that the performance of the digital system is acceptably close to the performance of the ideal DT system.
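The distinction between a DT signal (real-valued samples) and a digital signal (samples restricted to a finite set of levels) can be illustrated with a short sketch. Python is used here purely for illustration; the sampling instants, signal, and bit depth are arbitrary choices, not values from the text.

```python
import numpy as np

# DT signal: ideal samples of s_a(t) = cos(2*pi*t) taken at t = nT, T = 0.1 s
T = 0.1
n = np.arange(20)
s = np.cos(2 * np.pi * n * T)     # real-valued amplitudes: a DT signal

# Digital signal: amplitudes restricted to a B-bit uniform grid on [-1, 1]
B = 3
step = 2.0 / (2 ** B)             # quantization step for a B-bit grid
s_dig = np.round(s / step) * step # now only a discrete set of values

# Rounding to the grid keeps the quantization error within half a step
assert np.max(np.abs(s - s_dig)) <= step / 2
```

Increasing B shrinks the step and moves the digital signal closer to the ideal DT sequence, which is the trade-off behind the bit-allocation question raised above.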

Linear CT systems are characterized by the familiar mathematics of differential equations, continuous convolution operators, Laplace transforms, and Fourier transforms. Similarly, linear DT systems are described by the mathematics of difference equations, discrete convolution operators, Z-transforms, and discrete Fourier transforms. It appears that for every major concept in CT systems, there is a similar concept for DT systems (e.g., differential equations and difference equations, continuous convolution and discrete convolution, etc.). However, in spite of this duality of concepts, it is impossible to apply directly the mathematics of CT systems to DT systems, or vice versa.

Many modern systems consist of both analog and digital subsystems, with appropriate analog-to-digital (A/D) and digital-to-analog (D/A) devices at the interfaces. For example, it is common to use a digital computer in the control loop of an analog plant. Analytical difficulties often occur at the boundaries between the analog and digital portions of the system because the mathematics used on the two sides of the interface must be different. It is often useful to assume that a sequence s(n) is derived from an analog signal s_a(t) by ideal sampling, i.e.

(Eq. 1) s(n) = s_a(t)|_{t = nT}

An alternative model for the sampled signal is denoted by s*(t) and defined by

(Eq. 2) s*(t) = Σ_{n=−∞}^{+∞} s_a(t) δ_a(t − nT)

where δ_a(t) is an analog impulse function. Both s(n) and s*(t) are used throughout the literature to represent an ideal sampled signal. Note that even though s(n) and s*(t) represent the same essential information, s(n) is a DT signal and s*(t) is a CT signal. Hence, they are not mathematically identical. In fact, s(n) is a "DT-world" model of a sampled signal, whereas s*(t) is a "CT-world" model of the same phenomenon.

URL: https://www.sciencedirect.com/science/article/pii/B9780750672917500303

Fourier Analysis of Discrete-time Signals and Systems

Luis Chaparro , in Signals and Systems Using MATLAB (Second Edition), 2015

11.2.5 Energy/Power of Aperiodic Discrete-time Signals

As for continuous-time signals, the energy or power of a discrete-time signal x[n] can be equivalently computed in time or in frequency.

Parseval's Energy Equivalence—If the DTFT of a finite-energy signal x[n] is X(e^{jω}), the energy E_x of the signal is given by

(11.20) E_x = Σ_{n=−∞}^{∞} |x[n]|² = (1/2π) ∫_{−π}^{π} |X(e^{jω})|² dω

Parseval's Power Equivalence—The power of a finite-power signal y[n] is given by

(11.21) P_y = lim_{N→∞} [1/(2N+1)] Σ_{n=−N}^{N} |y[n]|² = (1/2π) ∫_{−π}^{π} S_y(e^{jω}) dω

where

S_y(e^{jω}) = lim_{N→∞} |Y_N(e^{jω})|²/(2N+1)
Y_N(e^{jω}) = F(y[n] W_{2N+1}[n])   (DTFT of y_N[n])
W_{2N+1}[n] = u[n+N] − u[n−(N+1)]   (rectangular window)

Parseval's energy equivalence for a finite-energy signal x[n] is obtained as follows:

E_x = Σ_n |x[n]|² = Σ_n x[n] x*[n] = Σ_n x[n] [(1/2π) ∫_{−π}^{π} X*(e^{jω}) e^{−jωn} dω] = (1/2π) ∫_{−π}^{π} X*(e^{jω}) [Σ_n x[n] e^{−jωn}] dω = (1/2π) ∫_{−π}^{π} |X(e^{jω})|² dω,

where the bracketed sum in the next-to-last expression is recognized as X(e^{jω}).

The magnitude squared |X(e^{jω})|² has the units of energy per radian, and so it is called an energy density. When |X(e^{jω})|² is plotted against frequency ω, the plot is called the energy spectrum of the signal: it shows how the energy of the signal is distributed over frequency.
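Eq. (11.20) can be checked numerically. For a finite-length signal, replacing the DTFT integral by its N-point DFT counterpart gives the exact identity Σ|x[n]|² = (1/N) Σ_k |X[k]|²; a Python sketch (the decaying exponential is an arbitrary illustrative choice):

```python
import numpy as np

# Finite-energy signal: x[n] = 0.8**n for n = 0..63
x = 0.8 ** np.arange(64)

# Time-domain energy
Ex_time = np.sum(np.abs(x) ** 2)

# Frequency-domain energy: the DTFT integral (1/2pi) int |X(e^jw)|^2 dw
# becomes the DFT sum (1/N) sum_k |X[k]|^2 for a length-N signal
X = np.fft.fft(x)
Ex_freq = np.sum(np.abs(X) ** 2) / len(x)

assert np.isclose(Ex_time, Ex_freq)
```

The agreement is exact (up to rounding) because Parseval's relation holds term-by-term for the DFT, which samples the DTFT at N equispaced frequencies.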

Now, if the signal y[n] has finite power we have that

P_y = lim_{N→∞} [1/(2N+1)] Σ_{n=−N}^{N} |y[n]|²

and windowing y[n] with a rectangular window W_{2N+1}[n],

y_N[n] = y[n] W_{2N+1}[n], where W_{2N+1}[n] = 1 for −N ≤ n ≤ N and 0 otherwise,

we have that

P_y = lim_{N→∞} [1/(2N+1)] Σ_{n=−∞}^{∞} |y_N[n]|² = lim_{N→∞} [1/(2N+1)] (1/2π) ∫_{−π}^{π} |Y_N(e^{jω})|² dω = (1/2π) ∫_{−π}^{π} lim_{N→∞} [|Y_N(e^{jω})|²/(2N+1)] dω = (1/2π) ∫_{−π}^{π} S_y(e^{jω}) dω

Plotting S_y(e^{jω}) as a function of ω provides the distribution of the power over frequency. Periodic signals constitute a special case of finite-power signals, and their power spectrum is much simplified by their Fourier series, as we will see later in this chapter.

The significance of the above results is that for any signal, whether of finite energy or of finite power, we obtain a way to determine how the energy or power of the signal is distributed over frequency. The plots of |X(e^{jω})|² and S_y(e^{jω}) versus ω, corresponding to the finite-energy signal x[n] and the finite-power signal y[n], are called the energy spectrum and the power spectrum, respectively. If a signal is known to have infinite energy but finite power, the windowed computation allows us to approximate the power and the power spectrum from a finite number of samples.
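The windowed power estimate of Eq. (11.21) can be tried on a concrete finite-power signal: a sinusoid y[n] = A cos(ω₀n), whose power is A²/2. A Python sketch (amplitude, frequency, and window sizes are illustrative choices):

```python
import numpy as np

A, w0 = 2.0, 0.3                    # amplitude and frequency of y[n]
for N in (100, 1000, 10000):
    n = np.arange(-N, N + 1)        # support of the window W_{2N+1}[n]
    yN = A * np.cos(w0 * n)         # windowed signal y_N[n]
    P = np.sum(np.abs(yN) ** 2) / (2 * N + 1)
print(P)  # approaches A**2 / 2 = 2.0 as N grows
```

As N increases the estimate converges to A²/2, illustrating why the limit over the growing rectangular window recovers the true power of an infinite-energy signal.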

URL: https://www.sciencedirect.com/science/article/pii/B9780123948120000115

Discrete Fourier Analysis

Luis F. Chaparro , Aydin Akan , in Signals and Systems Using MATLAB (Third Edition), 2019

11.2.6 Energy/Power of Aperiodic Discrete-Time Signals

As for continuous-time signals, the energy or power of a discrete-time signal x[n] can be equivalently computed in time or in frequency.

Parseval's energy equivalence—If the DTFT of a finite-energy signal x[n] is X(e^{jω}), the energy E_x of the signal is given by

(11.20) E_x = Σ_{n=−∞}^{∞} |x[n]|² = (1/2π) ∫_{−π}^{π} |X(e^{jω})|² dω.

Parseval's power equivalence—The power of a finite-power signal y[n] is given by

(11.21) P_y = lim_{N→∞} [1/(2N+1)] Σ_{n=−N}^{N} |y[n]|² = (1/2π) ∫_{−π}^{π} S_y(e^{jω}) dω

where

S_y(e^{jω}) = lim_{N→∞} |Y_N(e^{jω})|²/(2N+1),
Y_N(e^{jω}) = F(y[n] W_{2N+1}[n])   (DTFT of y_N[n]),
W_{2N+1}[n] = u[n+N] − u[n−(N+1)]   (rectangular window).

Parseval's energy equivalence for a finite-energy signal x[n] is obtained as follows:

E_x = Σ_n |x[n]|² = Σ_n x[n] x*[n] = Σ_n x[n] [(1/2π) ∫_{−π}^{π} X*(e^{jω}) e^{−jωn} dω] = (1/2π) ∫_{−π}^{π} X*(e^{jω}) [Σ_n x[n] e^{−jωn}] dω = (1/2π) ∫_{−π}^{π} |X(e^{jω})|² dω,

where the bracketed sum in the next-to-last expression is recognized as X(e^{jω}).

The magnitude squared |X(e^{jω})|² has the units of energy per radian, and so it is called an energy density. When |X(e^{jω})|² is plotted against frequency ω, the plot is called the energy spectrum of the signal: it shows how the energy of the signal is distributed over frequencies.

Now, if the signal y[n] has finite power we have

P_y = lim_{N→∞} [1/(2N+1)] Σ_{n=−N}^{N} |y[n]|²

and windowing y[n] with a rectangular window W_{2N+1}[n],

y_N[n] = y[n] W_{2N+1}[n], where W_{2N+1}[n] = 1 for −N ≤ n ≤ N and 0 otherwise,

we have

P_y = lim_{N→∞} [1/(2N+1)] Σ_{n=−∞}^{∞} |y_N[n]|² = lim_{N→∞} [1/(2N+1)] (1/2π) ∫_{−π}^{π} |Y_N(e^{jω})|² dω = (1/2π) ∫_{−π}^{π} lim_{N→∞} [|Y_N(e^{jω})|²/(2N+1)] dω = (1/2π) ∫_{−π}^{π} S_y(e^{jω}) dω.

Plotting S_y(e^{jω}) as a function of ω provides the distribution of the power over frequency. Periodic signals constitute a special case of finite-power signals, and their power spectrum is much simplified by their Fourier series, as we will see later in this chapter.

The significance of the above results is that for any signal, whether of finite energy or of finite power, we obtain a way to determine how the energy or power of the signal is distributed over frequency. The plots of |X(e^{jω})|² and S_y(e^{jω}) versus ω, corresponding to the finite-energy signal x[n] and the finite-power signal y[n], are called the energy spectrum and the power spectrum, respectively. If a signal is known to have infinite energy but finite power, the windowed computation allows us to approximate the power and the power spectrum from a finite number of samples.

URL: https://www.sciencedirect.com/science/article/pii/B9780128142042000223

REVIEWS

Wing-Kuen Ling , in Nonlinear Digital Filters, 2007

Relationships among continuous time signals, sampled signals and discrete time signals in the frequency domain

Denote a continuous time signal as x(t) and the sampling frequency as f_s. Then the sampling period is 1/f_s and the continuous time sampled signal is x_s(t) = x(t) Σ_n δ(t − n/f_s). By taking the continuous time Fourier transform of this sampled signal, we have X_s(ω) = f_s Σ_n X(ω − 2πf_s n). Since X_s(ω) is periodic with period 2πf_s, if X(ω) is bandlimited within (−πf_s, πf_s), then X(ω) can be reconstructed via simple lowpass filtering with filter passband (−πf_s, πf_s). Hence, if a signal is bandlimited to (−πf_s, πf_s), f_s is the minimum sampling frequency that can guarantee perfect reconstruction.

This frequency is called the Nyquist frequency. As x(t) Σ_n δ(t − n/f_s) = Σ_n x(n/f_s) δ(t − n/f_s), by taking the continuous time Fourier transform on both sides we have f_s Σ_n X(ω − 2πf_s n) = Σ_n x(n/f_s) e^{−jωn/f_s}. Denote the discrete time sequence x(n/f_s); taking the discrete time Fourier transform of this sequence, we have X_D(ω) = Σ_n x(n/f_s) e^{−jωn}. Hence, we have X_D(ω/f_s) = X_s(ω) = f_s Σ_n X(ω − 2πf_s n).
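The identity X_D(ω/f_s) = f_s Σ_n X(ω − 2πf_s n) can be verified numerically for a signal whose Fourier transform is known in closed form. A Python sketch using the Gaussian x(t) = e^{−t²}, X(ω) = √π e^{−ω²/4} (an illustrative choice; both sums are truncated, which is safe because the Gaussian decays rapidly):

```python
import numpy as np

fs = 2.0    # sampling frequency (samples/s), an arbitrary test value
w = 1.0     # test frequency (rad/s)

# Left side: DTFT of the samples x(n/fs), evaluated at Omega = w/fs
n = np.arange(-30, 31)
XD = np.sum(np.exp(-(n / fs) ** 2) * np.exp(-1j * (w / fs) * n))

# Right side: fs times the sum of shifted copies of X(w) = sqrt(pi)*exp(-w^2/4)
k = np.arange(-10, 11)
Xs = fs * np.sum(np.sqrt(np.pi) * np.exp(-((w - 2 * np.pi * fs * k) ** 2) / 4))

assert np.isclose(XD.real, Xs) and abs(XD.imag) < 1e-9
```

This is exactly the Poisson summation formula: sampling in time corresponds to periodizing (and scaling by f_s) in frequency.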

URL: https://www.sciencedirect.com/science/article/pii/B9780123725363500028

Introduction to Digital Signal Processing

Winser Alexander , Cranos Williams , in Digital Signal Processing, 2017

1.1 Introduction

Advances in digital circuit and systems technology have had a dramatic impact on modern society related to the use of computer technology for many applications that affect our daily lives. These advances have enabled corresponding advances in digital signal processing (DSP) which have led to the use of DSP for many applications such as digital noise filtering, frequency analysis of signals, speech recognition and compression, noise cancellation and analysis of biomedical signals, image enhancement, and many other applications related to communications, television, data storage and retrieval, information processing, etc. [1].

A signal can be considered to be something that conveys information [2]. For example, a signal can convey information about the state or behavior of a physical system or a physical phenomenon, or it can be used to transmit information across a communication medium. Signals can be used for the purpose of communicating information between humans, between humans and machines, or between two or more machines. The information in a signal is represented as variations in the patterns of some quantity that can be manipulated, stored, or transmitted by a physical process [3]. For example, a speech signal can be represented as a function of time, and an image can be represented as a function of two spatial variables. The speech signal can be considered to be a one-dimensional signal because it has one independent variable, which is time. The image can be considered to be a two-dimensional signal because it has two independent variables such as width and height. It is common to use the convention of expressing the independent variable for one-dimensional signals as time, although the actual independent variable may not be time. This convention will generally be used in this text.

The independent variable(s) for a signal may be continuous or discrete. A signal is considered to be a continuous time signal if it is defined over a continuum of the independent variable. A signal is considered to be discrete time if the independent variable only has discrete values. For many practical applications, the values of a discrete time signal are quantized to obtain numbers that can be represented for use in a digital circuit or system. A quantized discrete time signal is considered to be a digital signal. Thus, if both the independent and dependent variables are only defined at discrete values, then the signal is considered to be a digital signal. Digital signals can be represented as a sequence of finite precision numbers.

Signals play an important role in many activities in our daily lives. Signals such as speech, music, video, etc., are routinely encountered. A signal is a function of an independent variable such as time, distance, position, temperature, or pressure. For example, the speech and music we hear are signals represented by the air pressure at a point in space as a function of time. The ear converts the signal into a form that the brain can interpret. The video signal in a television consists of a sequence of images called frames, and each frame can be considered to be an image. The video signal is a function of three variables: two spatial coordinates and time.

The independent variables such as time, distance, temperature, etc., for many of the signals we interact with daily, can be considered to be continuous. Signals with continuous independent variables are considered to be continuous time signals. Advances in computer and digital systems technology have made it practical to sample and quantize many of these signals and process them using digital circuits and systems for practical applications. The processing of signals using computers and other digital systems is called digital signal processing. Digital signal processing involves the sampling, quantization and processing of these signals for many applications including communications, voice processing, image processing, digital communications, the transfer of data over the internet, and various kinds of data compression.

Many applications that involve continuous time signals are implemented using digital signal processing. The continuous time signals are quantized and coded in digital format to be processed by digital circuits and systems. The output from these digital systems is then either stored for later use or converted to continuous time signals to meet the requirements of the application. There are many reasons why digital signal processing has become a cost effective approach to implement many applications including speech processing, video processing and transmission, transmission of signals over communications media, and data retrieval and storage. Some of these reasons follow [4]:

1.

A programmable digital system provides the flexibility to configure a system for different applications. The processing algorithm can be modified by changing the system parameters or by changing the order of the operations through the use of software. Reconfiguring a continuous time system often means redesigning the system and changing or modifying its components.

2.

Tolerances in continuous time or analog system components make it difficult for a designer to control the accuracy of the output signal. On the other hand, the accuracy of the output signal for a digital system is predictable and controllable by the type of arithmetic used and the number of bits used in the computations.

3.

Digital signals can be stored in digital computers, on disks or other storage media, without the loss of fidelity beyond that introduced by acquiring the signal through some process such as converting a continuous time signal to a digital signal. Storage media for continuous time signals are prone to the loss of signal accuracy over time and/or to the addition of noise due to surroundings.

4.

Digital implementation permits the easy sharing of a given processor among a number of signals by timesharing. Several digital signals can be combined, as one, using multiplexing. The multiplexed signal can then be processed by a single processor as needed for a particular application. The corresponding individual outputs can then be separated from the output of the digital system with the results being the same as if the signals were processed by different systems. This permits the use of a single high speed digital system to process several different digital signals with relatively low sampling frequencies.

5.

Digital signal processing can be used to easily process very low frequency signals such as seismic signals. Continuous time processing of these signals would require very large components such as large capacitors and/or large inductors.

6.

The implementation cost of digital systems is often very low due to the manufacture of a large number of microprocessors or microchips with a single design. This has made it very cost effective to implement digital systems that can take advantage of being manufactured in large quantities.

7.

Encryption can be used to provide security with digital signals. This is important for internet security as well as security for wireless communications and the protection of personal data.

There are some disadvantages associated with digital signal processing:

1.

A digital signal processing system, for a particular application, is often more complicated than a corresponding analog signal processing system.

2.

The upper frequency that can be represented for digital systems is determined by the sampling frequency. Thus, continuous time systems are still used for many high frequency applications.

3.

Digital systems use active circuits that consume power. Analog systems can be designed using passive circuits, which can result in a system that consumes less power than a corresponding digital system.

Discrete time signal processing is used in many applications considered to be in the category of information technology. Information technology includes such diverse subjects as speech processing, image processing, multimedia applications, computational engineering, visualization of data, database management, teleconferencing, remote operation of robots, autonomous vehicles, computer networks, simulation and modeling of physical systems, etc. Information technology, which is largely based upon the use of digital signal processing concepts, is essential for solving critical national problems in areas such as fundamental science and engineering, environment, health care, and government operations.

URL: https://www.sciencedirect.com/science/article/pii/B9780128045473000012

Sampling Theory

Luis F. Chaparro , Aydin Akan , in Signals and Systems Using MATLAB (Third Edition), 2019

8.3.3 Sampling, Quantizing and Coding With MATLAB

The conversion of a continuous-time signal into a digital signal consists of three steps: sampling, quantizing and coding. These are the three operations an A/D converter does. To illustrate them consider a sinusoid x(t) = 4cos(2πt). Its sampling period, according to the Nyquist sampling rate condition, is

T_s ≤ π/Ω_max = 0.5 s/sample

as the maximum frequency of x(t) is Ω_max = 2π. We let T_s = 0.01 (s/sample) to obtain a sampled signal x_s(nT_s) = 4cos(2πnT_s) = 4cos(2πn/100), a discrete sinusoid of period 100. The following script is used to get the sampled x[n] and the quantized x_q[n] signals and the quantization error ε[n] (see Fig. 8.14).

Figure 8.14. A period of sinusoid x(t) = 4cos(2πt) (left-top), sampled sinusoid using T_s = 0.01 (right-top), quantized sinusoid using 4 levels (left-bottom), quantization error (right-bottom) 0 ≤ ε ≤ Δ = 2.

The quantization of the sampled signal is implemented with our function quantizer, which compares each of the samples x_s(nT_s) with 4 levels and assigns to each the corresponding level. Notice the approximation of the values given by the quantized signal to the actual values of the signal. The difference between the original and the quantized signal, or the quantization error, ε(nT_s), is also computed and shown in Fig. 8.14.

The binary signal corresponding to the quantized signal is computed using our function coder which assigns the binary codes '10', '11', '00' and '01' to the 4 possible levels of the quantizer. The result is a sequence of 0s and 1s, each pair of digits sequentially corresponding to each of the samples of the quantized signal. The following is the function used to effect this coding.
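The MATLAB listings themselves are not reproduced in this excerpt. A minimal Python sketch of the same idea, a 4-level quantizer followed by a 2-bit coder using the codes '10', '11', '00' and '01', is given below; the function names and the clipping details are illustrative assumptions, not the text's actual quantizer and coder.

```python
import numpy as np

def quantize4(x):
    """4-level quantizer: output levels -2d, -d, 0, d with d = max|x|/2 (illustrative)."""
    d = np.max(np.abs(x)) / 2
    y = np.clip(np.floor(x / d) * d, -2 * d, d)   # quantized samples
    return d, y, x - y                            # step, levels, error

def code4(y, d):
    """Assign 2-bit codes to the 4 levels: -2d->'10', -d->'11', 0->'00', d->'01'."""
    codes = {-2: '10', -1: '11', 0: '00', 1: '01'}
    return ''.join(codes[int(round(v / d))] for v in y)

Ts = 0.01
n = np.arange(100)
xs = 4 * np.cos(2 * np.pi * n * Ts)   # one period of the sampled sinusoid
d, y, e = quantize4(xs)
bits = code4(y, d)                    # two bits per sample
```

With this convention the quantization error stays in the range 0 ≤ ε ≤ Δ = 2, matching the behavior described for the figure.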

URL: https://www.sciencedirect.com/science/article/pii/B9780128142042000193

Sampling Theory

Luis Chaparro , in Signals and Systems Using MATLAB (Second Edition), 2015

8.3.3 Sampling, Quantizing, and Coding with MATLAB

The conversion of a continuous-time signal into a digital signal consists of three steps: sampling, quantizing, and coding. These are the three operations an A/D converter does. To illustrate them consider a sinusoid x(t) = 4cos(2πt). Its sampling period, according to the Nyquist sampling rate condition, is

T_s ≤ π/Ω_max = 0.5 s/sample

as the maximum frequency of x(t) is Ω_max = 2π. We let T_s = 0.01 (s/sample) to obtain a sampled signal x_s(nT_s) = 4cos(2πnT_s) = 4cos(2πn/100), a discrete sinusoid of period 100. The following script is used to get the sampled x[n] and the quantized x_q[n] signals and the quantization error ε[n] (see Figure 8.13).

Figure 8.13. A period of sinusoid x(t) = 4cos(2πt) (left-top), sampled sinusoid using T_s = 0.01 (right-top), quantized sinusoid using 4 levels (left-bottom), quantization error (right-bottom) 0 ≤ ε ≤ Δ = 2.

%%
% Sampling, quantization and coding
%%
clear all; clf
% continuous-time signal
t=0:0.01:1; x=4*cos(2*pi*t);
% sampled signal
Ts=0.01; N=length(t); n=0:N-1;
xs=4*cos(2*pi*n*Ts);
% quantized signal
Q=2;    % number of quantization levels is 2Q
[d,y,e]=quantizer(x,Q);
% binary signal
z=coder(y,d)

The quantization of the sampled signal is implemented with our function quantizer which compares each of the samples x_s(nT_s) with four levels and assigns to each the corresponding level. Notice the approximation of the values given by the quantized signal to the actual values of the signal. The difference between the original and the quantized signal, or the quantization error, ε(nT_s), is also computed and shown in Figure 8.13.

function [d,y,e]=quantizer(x,Q)
% Input:   x, signal to be quantized at 2Q levels
% Outputs: d, quantization step
%          y, quantized signal
%          e, quantization error
% USE [d,y,e]=quantizer(x,Q)
%
N=length(x); d=max(abs(x))/Q;
for k=1:N,
   if x(k)>=0,
      y(k)=floor(x(k)/d)*d;
   else
      if x(k)==min(x),
         y(k)=(x(k)/abs(x(k)))*(floor(abs(x(k))/d)*d);
      else
         y(k)=(x(k)/abs(x(k)))*(floor(abs(x(k))/d)*d+d);
      end
   end
   if y(k)==2*d,
      y(k)=d;
   end
end
e=x-y;

The binary signal corresponding to the quantized signal is computed using our function coder which assigns the binary codes '10','11','00', and '01' to the 4 possible levels of the quantizer. The result is a sequence of 0s and 1s, each pair of digits sequentially corresponding to each of the samples of the quantized signal. The following is the function used to effect this coding.

function z1=coder(y,delta)
% Coder for 4-level quantizer
% Inputs:  y, quantized signal
%          delta, quantization step
% Output:  z1, binary sequence
% USE z1=coder(y,delta)
%
z1='00'; % starting code
N=length(y);
for n=1:N,
   if y(n)==delta
      z='01';
   elseif y(n)==0
      z='00';
   elseif y(n)==-delta
      z='11';
   else
      z='10';
   end
   z1=[z1 z];
end
M=length(z1);
z1=z1(3:M) % get rid of starting code

URL: https://www.sciencedirect.com/science/article/pii/B9780123948120000085

Discrete-Time Signals and Systems

Luis F. Chaparro , Aydin Akan , in Signals and Systems Using MATLAB (Third Edition), 2019

Abstract

The theory of discrete- and continuous-time signals and systems is similar, but there are significant differences. As functions of an integer variable, discrete-time signals are naturally discrete or obtained from analog signals by sampling. Periodicity coincides for both types of signals, but integer periods in discrete-time periodic signals impose new restrictions. Energy, power, and symmetry of continuous-time signals are conceptually the same as for discrete-time signals. Basic signals just like those for continuous-time signals are defined without mathematical complications. Extending linearity and time invariance to discrete-time systems, a convolution sum represents them. Significant differences with continuous-time systems are that the solution of difference equations can be obtained recursively, and that the convolution sum provides a class of non-recursive systems not present in the analog domain. Causality and BIBO stability are conceptually the same for both types of systems. The basic theory of two-dimensional signals and systems is introduced. The theory of one-dimensional signals and systems is easily extended to two dimensions; however, many of the one-dimensional properties are not valid in two dimensions. Simulations using MATLAB clarify the theoretical concepts.

URL: https://www.sciencedirect.com/science/article/pii/B978012814204200020X

Analysis of continuous and discrete time signals

Alvar M. Kabe , Brian H. Sako , in Structural Dynamics Fundamentals and Advanced Applications, 2020

5.2.2 Aliasing

In the conversion of a continuous time signal to digital form, aliasing is a critical consideration. If aliasing occurs, then the sampled time signal will not be representative of the actual physical phenomenon. To prevent aliasing we must either sample at more than twice the highest frequency contained in the analog data, or we must remove the spectral content above the Nyquist frequency (one-half of the sampling rate) before sampling. This removal is accomplished by filtering, which will be discussed in Section 5.4. However, for now it should be noted that in practice, because of limitations in filtering analog time histories, the sampling rate should be more than twice the highest frequency of interest. Accordingly, many data acquisition systems will ensure that the sampling rate is at least 2.5 times the highest frequency of interest.

It is worth noting that we can compute the apparent frequency that waveforms above the Nyquist frequency will "fold back to" when sampled. Consider a sinusoid, x(t) = cos(ωt), where ω = 2πf. Suppose we sample x(t) at a sampling rate Ω_s = 2πf_s, where f_s = 1/T_s. Furthermore, assume that the sampling rate is inadequate to prevent aliasing, i.e., f_s < 2f. Let

(5.2-9) f = m f_s ± f_0

where f_0 ≤ f_s/2 and m is an integer. Then

(5.2-10) x_s(t_n) = cos(ω t_n) = cos(2πf · nT_s) = cos(2π(m f_s ± f_0) · nT_s) = cos(2πmn f_s T_s ± 2πf_0 t_n) = cos(2πmn ± 2πf_0 t_n) = cos(±2πf_0 t_n) = cos(2πf_0 t_n)

Therefore, undersampling x(t) will produce an aliased signal with frequency equal to f_0 Hz.

In Fig. 5.2-1, we showed how a 1 Hz sinusoid (f = 1), when sampled with a period of 0.8 sec (f_s = 1/0.8 = 1.25), resulted in a sampled signal that appeared to possess a period of 4 sec. Substituting into Eq. (5.2-9), with m = 1, we obtain

(5.2-11) 1 = (1)(1.25) − f_0

and f_0 = 0.25, which corresponds to a period of 4 sec. Note that an infinite number of higher frequency waveforms could have folded back to yield the 0.25 Hz sampled signal. Indeed, if we only know that the 0.25 Hz waveform is an aliased signal, then for each combination of m and ±f_0 there would be many possible higher frequency waveforms that could be the source(s) of the aliased signal.

Fig. 5.2-4 shows an aliasing folding diagram. This tool is useful in determining how a signal with frequency higher than the Nyquist frequency (f_Nyquist = f_s/2) would fold to a lower frequency, f_0, waveform. For example, suppose we sample a time signal at f_s = 1000 samples per second (Hz); then f_Nyquist = 1000/2 = 500 Hz. Now, suppose that the analog time signal contains waveforms with frequencies of f_1 = 600 Hz, f_2 = 1100 Hz, and f_3 = 1700 Hz. Since each of these is above the Nyquist frequency, they will be aliased and appear in the sampled time signal as lower frequency waveforms at 400 Hz, 100 Hz, and 300 Hz, respectively (see Eq. (5.2-9)). If the analog signal contained energy at any of these lower frequencies, the resulting sampled time signals will be the superposition of the actual low frequency content and the aliased waveforms. Note that the aliased waveforms can add constructively or destructively, so that the sampled signal will appear to have greater or lower amplitudes, respectively, than in the analog signal.
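The folding computation of Eq. (5.2-9) is easy to mechanize. A short sketch (Python used for illustration) that returns the apparent frequency f_0 for a tone at f sampled at f_s:

```python
def alias_frequency(f, fs):
    """Apparent (folded) frequency of a tone at f Hz sampled at fs Hz.

    Writes f = m*fs +/- f0 with f0 <= fs/2, per Eq. (5.2-9), and returns f0.
    """
    f0 = f % fs                          # reduce modulo the sampling rate
    return fs - f0 if f0 > fs / 2 else f0

# The three tones from the text, sampled at fs = 1000 Hz:
print([alias_frequency(f, 1000) for f in (600, 1100, 1700)])  # [400, 100, 300]
```

The same function reproduces the earlier swept-sine example: a 1 Hz tone sampled at f_s = 1.25 Hz folds to 0.25 Hz.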

Figure 5.2-4. Aliasing folding diagram for f_Nyquist = f_s/2 = 500 Hz, f_1 = 600 Hz, f_2 = 1100 Hz, and f_3 = 1700 Hz.

The arrows in the aliasing folding diagram also indicate the apparent frequency rate of change. For example, consider a sinusoid with increasing frequency, f, that would be used during a swept-sine test. Suppose that our sampling rate is not adequate and imagine that we are visually monitoring the sampled sinusoidal input. As f increases to f_Nyquist, the observed input will display an increasing frequency. As f passes f_Nyquist, the observed frequency, f_0, will for a brief moment appear stationary and then begin to decrease. The decrease in f_0 will continue until f approaches the sampling rate, where it again appears stationary. Once f passes f_s, the observed frequency will again increase. This apparent increase and decrease in the observed frequencies as f increases past multiples of f_Nyquist and f_s explains the changing rotation rates of a tire as a car speeds up in a video taken with a slower constant frame rate.

URL: https://www.sciencedirect.com/science/article/pii/B9780128216156000058

Signals, Systems, and Spectral Analysis

Ali Grami , in Introduction to Digital Communications, 2016

3.2.2 Continuous-Time and Discrete-Time Signals

A signal is said to be a continuous-time signal if it is defined for all time t, a real number. Continuous-time signals arise naturally when a physical signal, such as a light wave, is converted by a transducer, such as a photoelectric cell, into an electrical signal. A continuous-time signal can have zero value at certain instants of time or for some intervals of time.

A signal is said to be a discrete-time signal if it is defined only at discrete instants of time n. In other words, the independent variable on the horizontal axis has discrete values only (i.e., it takes its value in the set of integers). Note that this does not mean a discrete-time signal has zero value at nondiscrete (noninteger) instants of time; it simply means we do not have (or do not care to have) the values at noninteger instants of time. A discrete-time signal g(n) is often derived from a continuous-time signal g(t) by the sampling process. Figure 3.7 shows continuous-time and discrete-time signals.

URL: https://www.sciencedirect.com/science/article/pii/B978012407682200003X