Multimedia Service Requirements

Recommendation F.701 provides guidelines for describing user requirements that are to be used as the basis for constructing new multimedia services. These guidelines are primarily intended to support the Multimedia service development methodology described in ITU-T Recommendation F.700. However, they can also be used as the basis for a structured dialogue between End-Users and Service Providers in order to arrive at a responsive service solution when applicable ITU-T service Recommendations are not yet available.

Multimedia service development methodology

A detailed methodology for developing Multimedia services is described in Recommendation F.700. Figure 1 provides an overview of this methodology and shows how end-user requirements are inserted into the service development process through the use of Application Scripts. The construction of these Scripts from End-User requirements is described in the remaining clauses of this Recommendation.

Figure 1 – Multimedia service development

  • Application scripts

An application script is a document that describes the essential characteristics of an End-User application so as to facilitate identification and evaluation of the multimedia communication capabilities required to support it. The script, when properly validated, provides the baseline requirements for new multimedia services. The procedure for developing and validating application scripts is described in clause 4.

  • Communication capabilities

Communication capabilities are the fundamental sets of communication tasks, media components and integration mechanisms required to develop the complex spectrum of multimedia services. The procedure for translating the application script into the required communications capabilities is described in ITU-T Recommendation F.700. Procedures are also identified for launching the development of new communications capabilities when required to more fully support emerging user needs.

  • Middleware service elements

The middleware service elements contain all the control features and the processing functions associated with the service. They interact with the various communication capabilities in order to control them or to process the user information.

  • Multimedia service Recommendations

The translation of a particular application script into a description of the required multimedia service can be accomplished directly from the basic communication capabilities by utilizing the procedures specified in ITU-T Recommendation F.700. However, this process can be simplified in many cases by recognizing that a significant number of end-user applications utilize just a few combinations of multimedia communication means. The methodology for describing these generic service architectures in a series of general ITU-T service Recommendations is also described in ITU-T Recommendation F.700.

Application Scripts

  • Introduction

An application script describes the essential characteristics of an end-user application in a manner designed to facilitate identification and evaluation of the required multimedia communications support capabilities. This is accomplished by first describing the application from the end user’s point of view and then translating this description into a form more useful for technical evaluation.

The procedures for constructing an application script are described in 4.2 through 4.4. Ideally, an application selected for the scripting process should represent a broad grouping of individual end-user applications which have the same essential functional characteristics and for which there appears to be a need for the development of a new multimedia service, service arrangement or enhanced service capability.

Differences between specific applications within this broad grouping can be represented by the specific values assigned to a particular requirement attribute. Examples are shown in 4.4. The procedures for validating the results of the scripting process are described in clause 5.

  • Prose description

The prose description of an application provides a comprehensive statement of its scope and functional characteristics, together with the user’s expectations for the quality of service. This description is written in a language understandable to the end user, who need not be aware of the technical aspects of the underlying service or supporting communications networks.

The prose description may be augmented by an application scenario and a set of implementation notes which further describe the application, highlighting those aspects which might otherwise remain unclear.

  • Functional model of an application

The functional model provides a pictorial representation of the essential functional elements identified in the prose description. This representation is presented from the perspective of the application, rather than from the supporting service or network, and contains only those elements visible to the end user.

Figure – Functional model of an application

The principal characteristics to be depicted in the model are:

  • the shared information space in which the interaction takes place;
  • the functional role of the major participants;
  • the required supporting information resources;
  • the type and configuration of the various interactions; and
  • the need to interface with associated application processes.

While there is no standard symbology for constructing the functional model, care should be taken to select a form of presentation that reflects the essential functional elements of the application in a clear and concise way.

  • Application matrix

An application matrix maps user requirements onto technical functionalities. The principles for developing attribute tables are the following:

  • Application matrices are intended to facilitate the mapping of user needs with technical functionalities in an easily understandable form.
  • Application matrices enable the evaluation of service functionalities in a systematic and compact form.
  • Application matrices facilitate assessing the importance of functionalities in regard to user needs.

The following are examples of user needs:

  • discussion of a jointly viewed document;
  • the need to move around;
  • the need to scrutinize fine details of a presented object.

The following are examples of functionalities the applications may require:

  • shared viewing space for images;
  • cordless communication access;
  • high-resolution image transfer.

The development of the matrix requires further study.
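Since the detailed form of the matrix is left for further study, the sketch below is purely illustrative: it records such a mapping in Python using the user needs and functionalities listed above, with an assumed 0–2 importance scale that is not part of F.701.

```python
# Hypothetical application matrix: rows are user needs, columns are
# functionalities, and each cell holds an assumed importance rating
# (0 = not needed, 1 = useful, 2 = essential). Not defined by F.701.
user_needs = [
    "discussion of a jointly viewed document",
    "need to move around",
    "scrutinize fine details of a presented object",
]
functionalities = [
    "shared viewing space for images",
    "cordless communication access",
    "high-resolution image transfer",
]
matrix = {
    ("discussion of a jointly viewed document", "shared viewing space for images"): 2,
    ("need to move around", "cordless communication access"): 2,
    ("scrutinize fine details of a presented object", "high-resolution image transfer"): 2,
}

def importance(need: str, functionality: str) -> int:
    """Return the assumed importance of a functionality for a user need."""
    return matrix.get((need, functionality), 0)

# Print the matrix in a compact tabular form.
for need in user_needs:
    row = [importance(need, f) for f in functionalities]
    print(f"{need:50s} {row}")
```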

  • Summary

A script may include a prose description, an application scenario, implementation notes and an application matrix (or several matrices for different environments or different times in the communication). Some scripts may contain only part of those elements.

Harmonization of Application Scripts with other bodies

Application scripts can be developed by the ITU or by other standards organizations, industry fora, consortia, user groups or individual end users. An application script, before being used as the basis for launching a new service development or evaluation effort by the ITU-T, should be discussed with the end user community if possible or reasonable. This discussion should take place between the relevant study groups and those organizations that have been identified as most representative of relevant end-user interests, in accordance with ITU-T policies and procedures (see ITU-T Recommendation A.4).

Videotelephony Teleservice for ISDN

The videotelephony teleservice for ISDN is a symmetrical, bi-directional, real-time, audiovisual teleservice in which speech and moving pictures are interchanged by means of one or two B-channels, using 64 kbit/s circuit-mode connections in the ISDN. The picture information transmitted is sufficient for adequate representation of fluid movements of a person displayed in head and shoulders view.

Description

1 General description

The videotelephony teleservice is defined as a fully standardized ISDN teleservice following the principles given in Recommendation I.210. Two cases can be identified for the videotelephony teleservice:

  • Case I: Videotelephone based on using one circuit-mode 64 kbit/s connection.
  • Case II: Videotelephone based on using two circuit-mode 64 kbit/s connections.

For case I, the 64 kbit/s connection carries both speech and video information; for case II, the first connection carries either speech alone or speech together with some video information, and the second connection carries video information.

The basic videotelephony teleservice is characterized by the transmission of moving pictures, displayed continuously in colour, simultaneously with the speech of the persons involved in the call (generally two in the case of a point-to-point connection).

The speech quality of this teleservice must be at least as good as that applicable to the telephony teleservice in the 64 kbit/s ISDN based on bandwidths of 3.1 kHz or 7 kHz respectively.

The videotelephony teleservice shall allow the communication between:

  • two users (e.g. terminals) in a point-to-point configuration via the ISDN over one or two circuit mode 64 kbit/s connection(s);
  • three or more users in a multipoint configuration as invoked by some supplementary services.

Videotelephone terminals must be capable of supporting the telephony teleservice.

An essential feature of the service is that, besides videotelephony, it also provides to the user the possibility to communicate with other ISDN telephone or videotelephone terminals by using only the speech communication facility.

It shall be possible to use videotelephone terminals to communicate with 3.1 kHz telephone terminals connected to the public switched telephone network (PSTN).

2 Specific terminology

Fall-back: Procedure performed either by the network or by the calling videotelephone terminals to establish calls to 3.1 kHz telephone terminals.

  • Call 1: The first call invoked in the videotelephony teleservice. It identifies the first 64 kbit/s connection between the subscribers. The call is invoked for all the service cases.
  • Call 2: The second call invoked in the videotelephony teleservice. It identifies the second 64 kbit/s connection between the two subscribers. The call is invoked for case II only (2 × 64 kbit/s).
  • Retention timer: This timer specifies the amount of time that the network retains the call information of the original call when a busy condition is encountered or the call is released. This timer is a network provider option. The value of this timer is greater than 15 seconds.
  • Videotelephone terminal: A terminal that supports the videotelephony teleservice.
  • 3.1 kHz telephone terminal: A terminal that supports the telephony 3.1 kHz teleservice.
  • 7 kHz telephone terminal: A terminal that supports the telephony 7 kHz teleservice.

Intercommunication and interworking considerations

1 Intercommunication/interworking with other terminals

Intercommunication with 3.1 kHz and 7 kHz ISDN terminals or interworking with PSTN shall be offered.

1.1 General principles

The videotelephony service shall include voice encoding according to Recommendation G.711. It may include additional speech encoding as an optional feature. The following fundamental requirements shall be fulfilled:

  • The user of a videotelephone terminal shall be able to establish calls to 3.1 kHz and 7 kHz telephone terminals (if the 7 kHz capability is supported) connected to the ISDN and to telephone terminals connected to the PSTN. Optionally, it should be able to reach other ISDN audiovisual terminals.
  • A videotelephone terminal shall be able to accept calls from 3.1 kHz and 7 kHz (if 7 kHz capability is supported) telephone terminals connected to the ISDN and from 3.1 kHz telephone terminals connected to the PSTN. Optionally, it should be able to accept calls from other ISDN audiovisual terminals.

As an option, videotelephone terminals may be pre-programmed to receive incoming videotelephone calls only. This latter function may be requested by users possessing, e.g., both a videotelephone terminal and a 3.1 kHz telephone terminal connected to the same access arrangement.

1.2 Fall back procedure

1.2.1 Fall back in the destination network

Fall-back to 3.1 kHz telephony shall be an inherent feature of videotelephony teleservice and shall be provided as a default procedure.

The user shall be offered the possibility of indicating whether he requires interworking/fall-back to the 3.1 kHz telephony teleservice. A request for the videotelephony teleservice without fall-back (if indicated by the calling terminal) shall be possible. The following procedure shall apply (a sketch of the resulting decision logic is given after the list):

  • If the calling user has indicated that fall-back is allowed, the network may offer the call to the called user at all videotelephone and 3.1 kHz telephone terminals if possible. The called user can accept the call either as a videotelephone or a 3.1 kHz telephone call at any terminal where the call is offered.

Note – The called terminals may recognize the fall-back situation and indicate it to the user.

  • The calling user shall be informed of the resultant telecommunication service, i.e. the videotelephony or 3.1 kHz telephony teleservice.
  • If no terminal accepts the call, this shall be indicated to the calling user.
  • If a busy condition is met at the terminals, supplementary services, e.g. completion of calls to the busy subscriber, shall apply.

Note – Echo cancellation will be disabled for videotelephone calls. If fall-back occurs, there is no current signalling mechanism for re-enabling the echo cancellers.

  • When fall-back is not implemented by the network (possible short-term situation), fall-back may be performed end-to-end by the calling videotelephone terminal by originating a 3.1 kHz telephone call.
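As a rough, non-normative sketch of the decision logic described in this clause, the following Python function maps the fall-back indication and the capabilities of the network and the called side to the resultant teleservice; the parameter names and return strings are assumptions made for the illustration.

```python
def resolve_call(fallback_allowed: bool,
                 network_supports_vt: bool,
                 called_accepts_vt: bool,
                 called_accepts_speech: bool) -> str:
    """Sketch of the fall-back outcome for a videotelephony call attempt.

    Returns the resultant teleservice indicated to the calling user:
    'videotelephony', '3.1 kHz telephony', or 'call failed'.
    """
    if not network_supports_vt:
        # 1.2.2: the destination network has no videotelephony capability.
        return "3.1 kHz telephony" if fallback_allowed and called_accepts_speech else "call failed"
    if called_accepts_vt:
        return "videotelephony"
    if fallback_allowed and called_accepts_speech:
        # 1.2.1: the call is accepted as a 3.1 kHz telephone call instead.
        return "3.1 kHz telephony"
    return "call failed"

# Example: fall-back allowed, called side only has a 3.1 kHz telephone.
print(resolve_call(True, True, False, True))   # -> 3.1 kHz telephony
```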

1.2.2 Fall-back when the ISDN does not offer the videotelephone teleservice

If the calling user has indicated that fall-back is allowed but the destination network does not support the videotelephony capabilities, the calling user shall receive an indication that fall-back has occurred and an indication of the resultant telecommunication service. The called user shall be offered the incoming call as a telephony 3.1 kHz call.

2 Interworking with private ISDNs

If the called user is on a private ISDN, the fall-back procedures will be performed by the private ISDN. The result of call presentation (videotelephony or 3.1 kHz telephony) within the private ISDN shall be indicated to the public ISDN.

Attributes/values

  • Application of the attribute method

Depending on the case that applies, the videotelephony teleservice description is based on the invocation of one or two calls: call 1 and call 2, described according to the attribute method.

Low layer attributes

  • Call 1

Transfer mode: circuit.

Transfer rate: 64 kbit/s.

Transfer capability: 7 kHz.

Note – Until the 7 kHz audio bearer service is available, videotelephones should, as an interim solution, use "unrestricted digital information" as the transfer capability when calling other videotelephones.

Structure: 8 kHz integrity.

Establishment of communication: demand.

Symmetry: bidirectional symmetric.

Configuration of communication: point-to-point, multipoint.

Note that in the case where fall-back to the 3.1 kHz telephony teleservice occurs, values of 3.1 kHz telephony teleservice bearer capability apply. Also if optionally a 3.1 kHz telephone call is established first, attributes of 3.1 kHz telephony teleservice bearer capability apply (telephone call instead of call 1).

  • Call 2

Transfer mode: circuit.

Transfer rate: 64 kbit/s.

Transfer capability: unrestricted digital information.

Structure: 8 kHz integrity.

Establishment of communication: demand.

Symmetry: bidirectional symmetric.

Configuration of communication: point-to-point, multipoint.
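To make the two-call structure concrete, the following sketch records the low layer attribute values listed above as simple Python data structures; the field names and the container itself are illustrative assumptions, not part of the attribute method.

```python
from dataclasses import dataclass

@dataclass
class BearerAttributes:
    """Illustrative container for the low layer attributes of one call."""
    transfer_mode: str
    transfer_rate_kbit_s: int
    transfer_capability: str
    structure: str = "8 kHz integrity"
    establishment: str = "demand"
    symmetry: str = "bidirectional symmetric"
    configuration: str = "point-to-point, multipoint"

# Case II (2 x 64 kbit/s): call 1 carries speech (7 kHz audio as the target
# capability), call 2 carries video as unrestricted digital information.
call_1 = BearerAttributes("circuit", 64, "7 kHz audio")
call_2 = BearerAttributes("circuit", 64, "unrestricted digital information")

# On fall-back to 3.1 kHz telephony, only a single call with the 3.1 kHz
# telephony bearer capability is used.
fallback_call = BearerAttributes("circuit", 64, "3.1 kHz audio")
print(call_1, call_2, fallback_call, sep="\n")
```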

Access attributes

  • Call 1

Access channel and rate: D(16) or D(64) for signalling, B(64) for user information.

Signalling access protocol, layer 1: Recommendations I.430/I.431.

Signalling access protocol, layer 2: Recommendation Q.921.

Signalling access protocol, layer 3: Recommendation Q.931.

Information access protocol, layer 1: Recommendations H.221, G.711, G.722 (option), H.242, H.261, AV.254.

Information access protocol, layer 2.

Information access protocol, layer 3.

Note that in the case where fall-back to the 3.1 kHz telephony teleservice occurs or if optionally a telephone call is established first, attribute 9.4 has the following values: Recommendations I.430/I.431, G.711.

  • Call 2

Access channel and rate: D(16/64) for signalling, B(64) for user information.

Signalling access protocol, layer 1: Recommendations I.430/I.431.

Signalling access protocol, layer 2: Recommendation Q.921.

Signalling access protocol, layer 3: Recommendation Q.931.

Information access protocol, layer 1: Recommendations H.221, H.242, H.261, AV.254.

Information access protocol, layer 2.

Information access protocol, layer 3.

High layer attributes

  • Call 1

Type of user information: Speech (telephony), video, data, audiovisual (information).

Layer 4 protocol functions: Recommendation H.221.

Layer 5 protocol functions: Recommendation H.242.

Layer 6 protocol functions: Recommendations G.722 (option), G.711, H.261.

Layer 7 protocol functions.

Note that in the case where fall-back to 3.1 kHz telephony teleservice occurs or, if optionally a 3.1 kHz telephone call is established first, the value of attribute 10) is “speech” and the value of attribute 13) is “Recommendation G.711”.

  • Call 2

Type of user information: video.

Layer 4 protocol functions: Recommendation H.221.

Layer 5 protocol functions: Recommendation H.242.

Layer 6 protocol functions: Recommendation H.261.

Layer 7 protocol functions.

 

IP videotelephony service

The IP videotelephony service provides real-time end-to-end bidirectional communications between two subscribers in different locations on the IP network by means of voice, video, real-time text and other forms of multimedia data and/or control facility.

The way to place a call on IP videotelephony services is the same as that for conventional telephony services in the sense that the subscriber dials a number, or enters another type of identifier, to initiate a call. In addition to videotelephony calls, the subscriber can also use the videotelephony services in other applications.

A subscriber of IP videotelephony services can be located in any place covered by IP networks, e.g., office buildings, meeting rooms, hotels, residences, telephone booths on the street or even on board a transport vehicle.

There are two types of IP videotelephony calls:

  • Point-to-point calls;
  • Multiparty calls (utilizing devices for audio, video and text mixing, e.g., the Multipoint Control Unit (MCU)).

There are two major types of IP videotelephony terminals:

  • Videotelephony sets;
  • Softphones running on computers.

Other types of videotelephony terminals, such as PSTN videophones, ISDN videophones or even conventional phones and mobile phones, can communicate with the IP videophones. These terminals may have limited capabilities regarding sending or receiving various types of information in videotelephony calls, but they can at least intercommunicate with IP videophones in a voice-only mode.

Functional model and service profile

Functional model

Figure 1 – Functional model of IP videotelephony services

The functional model of IP videotelephony services is shown in Figure 1. IP videotelephony terminals exchange voice, video, real-time text and multimedia data in a point-to-point or multipoint manner over IP networks. They can also intercommunicate with other videotelephony or conventional telephony terminals via interworking units. The call control unit processes call signalling and controls sessions, and the Authentication, Authorization and Accounting (AAA) unit performs the functions of user authentication, authorization and accounting.

Service model

Service model: the functional view

There are two models for IP videotelephony services from the functional view. Other models may exist or can be developed. However, they are a subject beyond the scope of this Recommendation and are thus for further study.

Basic videotelephony services

They refer to those IP videotelephony services that support the mandatory basic features. Only the voice, video and real-time text of conversational parties are transmitted between them.

Enhanced videotelephony services

They refer to those IP videotelephony services that support the enhanced optional features (i.e., other forms of multimedia data and/or remote control function). In addition to voice, video and real-time text, multimedia data like still pictures, pre-recorded audio/video clips, text messages, and collaboration-related data such as white-boarding information can be transmitted between videophones. Other optional features should include the convening of a conference and control of a conference, support for far-end camera control and other remote control.

Service model: the usage view

From the view of use environment, there are two types of IP videotelephony services.

Residential videotelephony services

For residential videotelephony service users, the following applications should be supported:

  • Face-to-face conversation with audio, video, and real-time text;
  • Face-to-face conversation with simultaneous transfer of pictographic data such as picture, pre-recorded video clips and files of other kinds;
  • Remote video surveillance for home security inspection and unmanned babysitting for kids, etc.;
  • Emergency calls in audio, video and text.

Business videotelephony services

For business service users, the following applications should be supported:

  • Face-to-face conversation with audio, video, and real-time text;
  • Face-to-face conversation with simultaneous transfer of pictographical data such as images, documents and files of other kinds;
  • Remote video surveillance;
  • Remote consultation;
  • Remote diagnosis in telemedicine;
  • Participation in videoconferencing;
  • Emergency calls in audio, video and text.

 Service profiles

While all videotelephony services have the common capability for transmission of audio, video and real-time text, they can be divided into different types of profiles according to the quality level of audio and video, and other multimedia data exchanged.

An IP videotelephony service may be offered with two levels of audio quality, three levels of video quality, one level of real-time text, five types of exchanged data and two types of control facility. The basic audio quality is level A0, equivalent to 3.4 kHz PCM telephony; the enhanced audio quality is level A1, equivalent to 7 kHz or 14 kHz wideband audio. The three levels of video are level V1 for QCIF video, level V2 for CIF video and level V3 for SDTV video. The level of real-time text is T2, good conversational text. The five types of exchanged multimedia data are still pictures, video clips, text messages, file transfer and joint editing. The two types of control facility are remote control and conference conductor.

Taking into account the factors mentioned above, the following profiles of videotelephony service are defined. The service profile descriptions are not intended to impose a particular and detailed way of offering the services, but to illustrate the approach to the profile definition.

  • Profile a: Basic videotelephony service: basic audio, QCIF or CIF video, real-time text, optional multimedia data or control facility;
  • Profile b: Enhanced Basic videotelephony service: wideband audio, CIF video, real-time text, optional multimedia data or control facility;
  • Profile c: Enhanced videotelephony service: wideband audio, CIF or SDTV video, real-time text, multimedia data and/or control facility.

These profiles ensure at least a minimum level of communication. Conformance to a profile ensures intercommunication with other terminals of the same profile. A terminal or a service function unit may conform to one or several profile(s), and may have capabilities beyond those embodied in the profile(s).
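Purely as an illustration of the profile idea (not a normative encoding), the sketch below writes the three profiles as capability sets; the level identifiers (A0/A1, V1–V3, T2) follow the text above, while the data structures and the conformance check are assumptions of the sketch.

```python
# Illustrative capability sets for the three service profiles described
# above. Higher numbers mean higher quality within a media component.
AUDIO = {"A0": 0, "A1": 1}            # A0 = 3.4 kHz PCM, A1 = wideband
VIDEO = {"V1": 1, "V2": 2, "V3": 3}   # QCIF, CIF, SDTV

profiles = {
    "a": {"audio": "A0", "video": ["V1", "V2"], "text": "T2", "data": "optional"},
    "b": {"audio": "A1", "video": ["V2"],       "text": "T2", "data": "optional"},
    "c": {"audio": "A1", "video": ["V2", "V3"], "text": "T2", "data": "supported"},
}

def conforms(terminal_caps: dict, profile: str) -> bool:
    """Check (roughly) whether a terminal's capabilities cover a profile."""
    p = profiles[profile]
    return (AUDIO[terminal_caps["audio"]] >= AUDIO[p["audio"]]
            and any(v in terminal_caps["video"] for v in p["video"])
            and terminal_caps.get("text") == "T2")

print(conforms({"audio": "A1", "video": ["V1", "V2"], "text": "T2"}, "a"))  # True
```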

Service scenarios

This clause describes typical service scenarios to illustrate the videotelephony service and to derive its technical requirements.

  1. Business videotelephony service scenarios

    1. Business call
    2. Business trip
    3. Customer service
    4. Call between a deaf and blind person and a service centre
  2. Residential videotelephony service scenarios

    1. Family call
    2. Monitoring

IP videotelephony service requirements

User requirements

Basic requirements:

  • Ability to exchange real-time video, audio and real-time text;
  • Ability to select audiovisual mode or voice-only mode;
  • Ability to make videotelephony calls at any place covered by IP networks;
  • Ability to make videotelephony calls in handset mode and handsfree mode;
  • Ability to make videotelephony calls by people with hearing or speech disabilities.

Enhanced requirements:

  • Ability to exchange multimedia data including still images, live and pre-recorded video clips, and collaboration data such as whiteboard;
  • Ability to implement remote control;
  • Ability to join and conduct a videoconference.

Application requirements

Basic requirements:

  • Voice and video switching processing;
  • Allow for various access means, such as xDSL, Ethernet, WLAN, GSM and 3G; the videotelephony service provider should support at least one of them;
  • Support for interworking between different videotelephony systems or networks through gateways;
  • Support for subscriber management and numbering. Use of E.164 numbering plan is mandatory;
  • Support for PSTN-like dialling modes. A keypad should be implemented on the videophone;
  • Support for audio arrangement of handset function and handsfree function;
  • Support for entry and display of real-time text. The specific method for text entry (e.g., keypad, integral or detachable keyboard, touch screen, verbal recognition) is beyond the scope of this Recommendation. The specific method for text display (e.g., video screen, Braille, verbal) is beyond the scope of this Recommendation.

Enhanced requirements:

  • Support for dynamic creation and termination of video streams;
  • Support for fallback from audiovisual mode to voice-only mode;
  • Support for upgrading from voice-only mode to audiovisual mode;
  • Support for real-time multimedia data exchange, such as still pictures, live and pre-recorded video clips, text messages, and collaboration data;
  • Support for other types of dialling modes, such as use of aliases.

Security requirements

The security of IP videotelephony calls should be guaranteed. There are three levels of security:

  • Subscriber authentication and authorization;
  • Call security;
  • Security of media streams.

Authentication and accounting requirements

The subscriber authentication of IP videotelephony services is used to ensure that only authorized subscribers can have access to IP videotelephony services; accurate accounting should be implemented for the IP videotelephony calls made by subscribers.

Interworking and intercommunication requirements

Three types of interworking or intercommunication are related to IP videotelephony service:

  • Interworking and intercommunication between terminals with different capability sets;
  • Interworking and intercommunication between terminals in different networks (PSTN, ISDN, 3G etc.);
  • Interworking and intercommunication between different IP videotelephony systems.

Transcoding or bit-rate conversion in the interworking unit may be needed so that each terminal receives and transmits the signals it is able to handle.

Terminals with different capabilities

Terminals may have different characteristics and capabilities, thus conforming to different profiles that the videotelephony service provider may offer. When they intercommunicate with each other, a common mode of the profiles will be used. This will adapt the service quality and functionalities to those of the terminal with the lowest quality level for each media component; however, communication is always possible because all terminals conform to the common basic profile.
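A minimal sketch of this common-mode selection, reusing the quality-level identifiers from the service profiles above: each media component falls back to the highest level supported by both terminals, which for a mismatched pair is the lower-quality terminal's level. The level ordering and data layout are assumptions of the sketch.

```python
# Ordered quality levels per media component (lowest first), reusing the
# identifiers from the service profiles; the ordering itself is assumed.
LEVELS = {
    "audio": ["A0", "A1"],
    "video": ["V1", "V2", "V3"],
    "text":  ["T2"],
}

def common_mode(caps_a: dict, caps_b: dict) -> dict:
    """Pick, per media component, the highest level both terminals support.

    Audio, video and real-time text are always present in the basic
    profile, so a common mode always exists between conforming terminals.
    """
    mode = {}
    for media, ordered in LEVELS.items():
        shared = [lvl for lvl in ordered if lvl in caps_a[media] and lvl in caps_b[media]]
        mode[media] = shared[-1] if shared else None
    return mode

a = {"audio": {"A0", "A1"}, "video": {"V1", "V2", "V3"}, "text": {"T2"}}
b = {"audio": {"A0"},       "video": {"V1"},             "text": {"T2"}}
print(common_mode(a, b))   # {'audio': 'A0', 'video': 'V1', 'text': 'T2'}
```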

 Terminals in different networks

IP videotelephony service needs to intercommunicate and interwork with the videotelephony service in other (non-IP) networks. In addition, the interworking between IP videotelephony calls and conventional telephony calls should be guaranteed.

  • Intercommunication and interworking between IP videotelephony terminal and PSTN/ISDN/3G, etc. videotelephony terminal;
  • Intercommunication and interworking between IP videotelephony terminal and PSTN/ISDN/mobile phone.

The requirements of this type of intercommunication include:

  • Audio transcoding or bit-rate conversion;
  • Video transcoding or bit-rate conversion;
  • Real-time text transcoding;
  • Data transcoding or bit-rate conversion;
  • Call control signalling conversion.

 Different IP videotelephony systems

There may be many IP videotelephony systems, such as H.323-based and SIP-based videotelephony systems. Intercommunication is needed if the terminals are located in different service systems. The requirements of this type of intercommunication include:

  • Audio/video/text/data transcoding or bit-rate conversion;
  • Authentication between different systems;
  • Accounting between different systems;
  • Call control signalling conversion;
  • Subscriber resource sharing and security.

QoS requirements

The QoS of IP videotelephony calls should be guaranteed. Since the major media elements of videotelephony calls are voice, video and text, it is necessary to guarantee clear voice, a clear head-and-shoulders image, continuous and smooth video at the required motion levels, and text with good performance. In enhanced videotelephony services, the quality of multimedia data should also be guaranteed.

To guarantee the QoS of IP videotelephony services, IP networks should provide QoS guarantee to support the bidirectional real-time service.

 Audio quality

IP videotelephony should support basic audio (3.4 kHz) and wideband audio (7 kHz or 14 kHz).

IP videotelephony should be capable of performing acoustic echo-cancellation.

IP videotelephony should have error resilience mechanisms to recover from packet loss.

For a videotelephone with the audio arrangement of handset or handsfree function, the sensitivity and loudness rating should be guaranteed.

 Video quality

IP videotelephony should be able to provide a smooth video depending on the application.

IP videotelephony should be able to provide reliable video colours.

IP videotelephony should have error resilience mechanisms to recover from packet loss.

Text quality

IP videotelephony should support good text quality for real-time conversation. The presentation should be smooth, concealing any jerkiness caused by transmission in blocks. The delay between each character entry and its display should be low so that the experience of a direct conversation is maintained. The reliability should be good, so that transmission errors are much rarer than typing errors and are indicated to the users.

 Lip synchronization

IP videotelephony should be capable of performing lip-synchronization so that there is no humanly perceptible asynchronism between audio and video.

 Overall delay

The overall delay comprises two parts: network transmission delay and delay due to processing on IP videophone terminals. The latter is caused by the codecs on the terminals performing encoding and decoding.

The overall delay for IP videotelephony should be kept within specified limits, since any delay greater than the specified threshold will cause an unacceptable degradation in QoS.
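This clause does not fix the numerical limit; the following sketch simply adds the two delay contributions named above and compares the total with a configurable threshold. The 400 ms default is an assumption for illustration only and is not taken from this text.

```python
def overall_delay_ms(network_delay_ms: float,
                     encode_delay_ms: float,
                     decode_delay_ms: float) -> float:
    """Overall one-way delay = network transmission delay + terminal
    processing (codec) delay, as described above."""
    return network_delay_ms + encode_delay_ms + decode_delay_ms

def within_limit(delay_ms: float, limit_ms: float = 400.0) -> bool:
    """Check the delay against an assumed, configurable QoS threshold."""
    return delay_ms <= limit_ms

d = overall_delay_ms(network_delay_ms=120.0, encode_delay_ms=60.0, decode_delay_ms=40.0)
print(d, within_limit(d))   # 220.0 True
```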

 Network transmission quality

In order to provide videotelephony services over an IP network, the IP network should be able to provide end-to-end QoS guarantee. The required QoS has different aspects such as low delays, low jitters, and low packet loss. The required network transmission quality for the IP videotelephony service should be defined according to ITU-T Rec. Y.1541.

 

Videophone Service in Public Switched Telephone Network (PSTN)

 Introduction

Recommendation F.723 contains the description and network-specific service requirements for the videophone service in the Public Switched Telephone Network (PSTN). The substance of this Recommendation complements the main body of the Supplement to Recommendation F.720, which deals with the network-independent service requirements for the corresponding Low Bit Rate (LBR) videophone services provided across LBR channels in networks such as the PSTN and digital mobile telecommunication networks. The difference between the service requirements in these two network domains stems from variations in access rates, mobility, the robustness of digital wireless transmission and different terminal environments. In addition to the network-specific requirements, the network-independent requirements for LBR videophone services and the general requirements for all videophone services, included in Recommendation F.720, apply to the service as well.

Due to bandwidth and technical restrictions, the quality of service is limited and may be inadequate for many applications, in particular in the professional domain. Hence, it is essential that users of the service can utilize the reduced network capabilities as efficiently as possible, with flexibility in the channel allocation between voice, video, image and data.

This service can be used on a stand-alone basis or as part of a multimedia application. In the latter case, the same requirements apply.

General description

The general service description, with the characteristics common to all LBR videophone services, is included in the Supplement to Recommendation F.720.

The service offers real-time conversational two-way audiovisual end-to-end communication, comprising video, audio and optional in-band data transfer capabilities. As a rule, the audiovisual information is transferred along a single PSTN connection, based on LBR data channels. The aggregation of two PSTN connections for achieving increased overall capacity and QoS may be offered as an option (further studies are required).

For the video, the service should support the coding scheme defined by Recommendation H.263 and a spatial resolution conforming to QCIF and the terminal requirements covered by Recommendation H.324.

Basic functionalities

The basic functionalities, characteristic of videophone services in general, as well as those of the LBR videophone services, have to be supported.

The basic PSTN speech communication facility, i.e. that of an analogue telephone, must be included in the terminal so that the user shall be able to use the terminal as a normal telephone as well.

Concerning the fall-back, a slow repetition rate video mode, manually controllable from the receiving end, must be supported.

Dynamic channel allocation shall be provided as a mandatory system capability. Dynamic channel allocation is executed by means of a framing, synchronization and terminal negotiation mechanism in conformance to Recommendations H.245 and H.324 and other relevant ITU-T Recommendations.
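As a simple numerical illustration of this dynamic channel allocation (a sketch, not the H.245/H.324 negotiation itself), the aggregate modem rate can be split between the G.723.1 speech rate in use and the remaining video or data capacity, matching the access attribute values given in A.2 below; multiplexing overhead is deliberately ignored.

```python
# Rough split of the aggregate PSTN modem rate between speech and video,
# ignoring H.223 multiplexing overhead (an intentional simplification).
G723_1_RATES_KBITS = (5.3, 6.4)   # the two G.723.1 speech rates

def channel_split(total_kbit_s: float = 28.8, speech_kbit_s: float = 6.4) -> dict:
    """Return an illustrative speech/video split of the aggregate rate."""
    if speech_kbit_s not in G723_1_RATES_KBITS:
        raise ValueError("unsupported speech rate for this sketch")
    return {"speech": speech_kbit_s, "video": round(total_kbit_s - speech_kbit_s, 1)}

print(channel_split())                       # {'speech': 6.4, 'video': 22.4}
print(channel_split(speech_kbit_s=5.3))      # {'speech': 5.3, 'video': 23.5}
```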

Possible applications

In the consumer/residential segment the envisaged applications are real-time human-to-human interaction, based on the head-and-shoulders view, and remote surveillance and event monitoring e.g. for babysitting, security, as well as other non-conversational applications.

In business/institutional segment the foreseen applications are remote expert consultation, requiring audiovisual support, remote surveillance and recognition, remote troubleshooting, remote inspection and accessing videoconferences.

The above applications may be offered on a stand-alone basis or as part of a value-added multimedia application requiring extended terminal capabilities.

From the service point of view, the basic videophone terminal has to support only QCIF for motion video. Some terminals, i.e. enhanced videophones, may also support the SQCIF format, or at least be capable of receiving SQCIF video frames.

Attributes and values

A.1 Low layer attributes

 

  1. Transfer mode: circuit
  2. Transfer rate: maximum 28.8 kbit/s (and beyond); may be less under degraded network conditions
  3. Transfer capability: 3.1 kHz audio for Rec. V.34 (video telephony); 3.1 kHz speech for telephone mode (analogue telephony)
  4. Structure: not applicable
  5. Establishment of communication: on demand
  6. Symmetry: bidirectional symmetric
  7. Configuration of call: point-to-point

A.2 Access attributes

 

  8. Access channel and rate: in videophone mode, 5.3/6.4 kbit/s speech and 23.5/22.4 kbit/s video; optionally, in data mode, 28.8 kbit/s; or, in speech-and-data mode, 5.3/6.4 kbit/s speech and 23.5/22.4 kbit/s data
  9.1. Signalling access protocol, layer 1: Rec. V.8/V.8 bis
  9.2. Signalling access protocol, layer 2: Rec. V.8/V.8 bis
  9.3. Signalling access protocol, layer 3: –
  9.4. Information access protocol, layer 1: Rec. H.223
  9.5. Information access protocol, layer 2: Rec. H.245
  9.6. Information access protocol, layer 3: –

 

A.3 High layer attributes

 

  10. Type of user information: audio and video and/or data, or plain data
  11. Layer 4 protocol functions: –
  12. Layer 5 protocol functions: –
  13. Layer 6 protocol functions: Rec. G.723.1 for audio; Rec. H.263 for video; T.120-series for data
  14. Layer 7 protocol functions: –

 

A.4 General attributes

 

  15. Supplementary services provided: for further study
  16. Quality of service:
    • audio: 3.1 kHz telephony, toll-quality speech;
    • video;
    • synchronization of audio and video: no subjectively discernible delay between speech and video, or minimal audio delay (inserted speech delay disabled);
    • data: for further study.
  17. Intercommunication/interworking possibilities:
    • with LBR videophone services in mobile networks;
    • with the ISDN videophone service, to the extent feasible with gateways in the network;
    • with other audiovisual services (on telephony only);
    • with telephony;
    • with other services: for further study.
  18. Operational and commercial aspects: for further study

 

Multimedia Conference Services

The multimedia conference services provide real-time transmission of voice together with motion video and/or various types of multimedia information between groups of users in two or more locations. The documents exchanged may contain all information types. When moving pictures are present, their quality must be at least sufficient for the adequate representation of the fluid movements of a small group of participants. The media components used are described in Annex A/F.700 on audiovisual/multimedia services. Media component audio (A.1/F.700) is mandatory, and one or more of the media components video (A.2/F.700), text (A.3/F.700), graphics (A.4/F.700) and still pictures (A.5/F.700) should be present.

Description

  1. General description

A multimedia conference service provides real-time communication between several users in different locations, combining a good audio facility with motion video of participants and/or transmission of multimedia information. The service is applicable to companies’ private conference rooms as well as to public-access conference rooms for hire on an occasional basis. It is applicable to a variety of types of multimedia conference terminals such as:

  • dedicated studios equipped for multimedia conferences;
  • multipurpose meeting rooms used only part-time for teleconferencing;
  • portable or roll-about equipment which can be moved from one room to another to provide temporary service;
  • terminal equipment for individual participants, e.g. microcomputer based terminals.

The service is bi-directional via telecommunication networks, and provides for interconnection of two or more multimedia conference terminals on an equal basis. Other types of terminals may be added to the conference, such as videotelephones or even plain telephones; although they will usually have some limitations on the capability to send and receive all the different types of information used in a multimedia conference call, they will at least be able to exchange speech allowing their users to take part in the discussion; this is described in clause 8 on intercommunication.

When the conference includes more than two terminals, a Multipoint Conference Unit (MCU) is usually required. All locations are connected individually to an MCU, which selects or appropriately combines these signals for each of the locations, and manages the signalling and the optional channels.

Multimedia conference services are essentially built around the communication task conferencing described in B.2/F.700. Other communication tasks (receiving and sending) are optional.

  2. Functional model

In a multimedia conference, two or more terminals exchange multimedia information through an interconnection system, under the control of a control unit (Figure 1). The interconnection system includes equipment for switching and/or combining the information from the different terminals and one or several networks.

Figure 1 – Functional model of multimedia conference services

NOTE – The physical boundaries of equipment are independent of the functional boundaries shown in the figure. For instance, an MCU usually includes a control unit and other units, dedicated to the various media components, which perform switching and/or combining actions and functionally belong to the interconnection system.

In the special case where there are only two terminals, they are connected point-to-point and a control unit is not needed.

  3. Configuration

The configuration may be point-to-point between two multimedia conference rooms, or multipoint-to-multipoint between several. In the latter case, the terminals are usually connected through one or more Multipoint Conference Units (MCUs). The MCUs have to fulfil three functions:

  1. managing the call, setting up and closing the connections;
  2. managing the conference, through control and indication signals exchanged with the terminals;
  3. handling the signals received and sent on each connection, switching, distributing, multiplexing and when necessary adapting and combining them as appropriate.

Types of configurations

The multipoint configurations can be subdivided into:

  • multichannel multipoint;
  • shared channel multipoint;
  • switched multipoint.

Including the point-to-point configuration, the following four types of configurations may thus be set up (see Figure 2).

Figure 2 – Types of configurations

 

Case a) Point-to-point configuration

Two conference rooms are directly connected (without any MCU). Conference management is by bilateral negotiation between the terminals.

Case b) Multichannel multipoint configuration

Three or more multimedia conference rooms are connected two-by-two so that each of them receives the signals from all the others and may use them in various ways. For instance, the sounds may be mixed or directed to separate loudspeakers. If video is present, each terminal permanently receives the images from all other locations and displays them simultaneously on separate screens or in different windows of a single screen. Various schemes may be used for exchanging documents. A limitation on the number of participants comes from the number of channels available in each location and from the amount of equipment for presenting the information to the users (or the number of inputs to this equipment), e.g. the number of images that can be displayed simultaneously by the terminal equipment. An MCU is not mandatory, but it may be used for managing the conference.

Case c) Shared channel multipoint

This configuration requires a multipoint conference unit. This MCU receives signals from every terminal and combines them to elaborate the signals sent out to each terminal. This may be done by multiplexing them in a higher bit-rate channel (e.g. four or five H0 channels into one H1 channel). It may also be achieved by mixing the sound signals, putting the various pictures into different windows (when applicable) and broadcasting the data channels if they are present. The MCU also processes the control and indication signals.

If the number of available multiplexed channels or the number of available windows for the video is smaller than the number of terminals minus one, then a selection must be made inside the MCU. This applies mainly to the pictures in a videoconference. The pictures may be chosen by each user, or by the Chairman if there is one, or they may be those of the latest speakers or any combination of these. More details on this case are for further study.

Case d) Switched multipoint configuration

This configuration requires at least one multipoint conference unit. This MCU receives signals from every terminal; it selects according to predetermined rules or to specific commands the signals sent to each terminal; it handles the signalling, commands and indications, forwards them when necessary and returns the appropriate answers; it manages the optional channels and broadcasts the signals received on these channels.

Most often the switching only applies to the video signals, because data may be multiplexed into a common packet-switched data channel and the sound signals are usually added in the MCU so that each terminal receives the sounds from all other terminals excluding its own; however, this addition process may be restricted to a few terminals in order to limit the noise level or unwanted disturbances if the number of participants is large. Alternatively, the sound may also be switched together with the image.
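A minimal sketch of the sound addition described above, where each terminal receives the sum of the audio from all other terminals (a mix-minus); restricting the sum to the loudest few contributors, as suggested for large conferences, is shown as an assumed policy.

```python
import numpy as np

def mcu_mix(audio_frames: dict, max_contributors=None) -> dict:
    """Return, per terminal, the sum of the audio from all other terminals.

    audio_frames maps a terminal id to a 1-D numpy array (one frame of PCM
    samples). If max_contributors is set, only the loudest terminals are
    summed, limiting noise build-up in large conferences (assumed policy).
    """
    ids = list(audio_frames)
    if max_contributors is not None:
        # Rank terminals by frame energy and keep only the loudest ones.
        energy = {i: float(np.sum(audio_frames[i] ** 2)) for i in ids}
        active = set(sorted(ids, key=energy.get, reverse=True)[:max_contributors])
    else:
        active = set(ids)
    out = {}
    for i in ids:
        others = [audio_frames[j] for j in active if j != i]
        out[i] = np.sum(others, axis=0) if others else np.zeros_like(audio_frames[i])
    return out

frames = {t: np.random.randn(160) for t in ("A", "B", "C", "D")}
mixed = mcu_mix(frames, max_contributors=3)
print({t: mixed[t].shape for t in mixed})
```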

Several MCUs may be required for some configurations, either for technical or for economic reasons. In this case, each terminal is connected to one of the MCUs; the MCUs are interconnected and handle the signals to and from other MCUs in the same way as those to and from an ordinary terminal.

From the network point of view, the connections that have to be established are as follows:

  • case a) point-to-point;
  • case b) several point-to-point connections linking all locations two-by-two;
  • case c) several point-to-point connections between each location and the MCU, which may be non-symmetrical if the network supports this type of connection;
  • case d) several point-to-point connections between each location and the MCU, and possibly between MCUs if several are used; a single or multiple point-to-multipoint connection(s) may also be used in networks that support them.

The four types of configurations are shown in Figure 2. Cases c) and d) are represented by the same diagram, but the functions of the MCU and the types of connections are different.

When several MCUs are involved, this should be transparent to the users, except for the increased transmission delays and the possible limitations due to the capability of the inter-MCU connections. In the following subclauses, the term MCU will be used interchangeably for a single MCU or for several interconnected MCUs, unless otherwise stated.

Figure 3 shows several functionally equivalent configurations where the MCU function is either concentrated in one location or distributed between several.

Figure 3 – Functionally equivalent configurations

  4. Roles of the participants

A participant may be allocated two types of particular privileges that allow his terminal to issue special commands to which the MCU will respond by the appropriate actions. These give him respectively the roles of the controller and the Chairman. The controller manages the call, and may perform the following:

  • accept new participants;
  • disconnect a participant from the call;
  • split the conference if this possibility is offered;
  • terminate the call;
  • ask for a continuation of the call beyond the reserved time.

The Chairman manages the conference, and may perform the following:

  • give the floor to a participant;
  • mute another terminal;
  • organize or allow a private talk;
  • possibly allocate the right to transmit other information, although this facility control function may also be separately assigned.

These two roles may be assigned to the same terminal or to different terminals. At the beginning of the call, the controller is usually the convener, but he may transfer this role to another terminal (see Note).

Other participants may ask for the floor or for the authorization to transmit data, and the MCU forwards these requests to the chair-terminal.

NOTE – Some MCUs may not be able to separate the two roles of controller and Chairman. In that case, they should be able to allocate the same joint controller-Chairman function to two different terminals (e.g. by issuing two identical tokens), and it is left to the two users each to perform only the functions pertaining to his role. Indeed, this may be a preferred solution because it allows the Chairman to leave to the controller some of the functions (for instance data management) that he may not be willing to perform; the Chairman may then have an easy-to-use terminal with only very few controls.

  5. Terminal aspects

In order to perform the basic functions necessary for multimedia conference services, the terminal equipment must include the following units necessary for audio communication:

  • one (or more) microphone(s);
  • one (or more) loudspeaker(s);
  • an audio codec;
  • audio related controls;
  • some means for identifying the speaker.

The terminal must also include a control module to which these units will be connected, and a network interface unit. The other types of information require specific equipment detailed below. The terminal should include the equipment(s) for at least one media component besides audio.

The equipment for handling multimedia documents includes one or more of the following functional units:

  • a microcomputer with a screen and optionally a printer;
  • a still picture equipment with a camera or scanner, a screen and/or a printer;
  • a telewriting equipment;
  • a facsimile equipment.

The basic equipment for video includes:

  • one (or more) camera(s);
  • one (or more) screen(s);
  • a video codec.

When video is present, means must be provided for displaying the outgoing picture, either permanently or by substituting it for the incoming picture on the screen. If sound and/or image sources are locally switched, then an indication must be given of which one is being sent out.

NOTE – Testing of the outgoing picture: it should be possible for the user to put an off-line terminal into a self-test procedure, which includes the codec, in order to test and control the outgoing picture.

Possible enhancements to the equipment in each location are for example:

  • voice switched or manually switched multiple microphones;
  • multiple cameras performing some of the following functions:
    • overall view of the room;
    • partial views of the assembly;
    • views of individual participants;
  • the pictures from these cameras may be switched or combined in different windows;
  • additional dedicated cameras;
  • multiple screens, for instance for displaying side-by-side the different windows;
  • zooming and panning;
  • far-end camera control;
  • various indications such as identification of the displayed location in a multipoint conference;
  • controls for conducting the conference, asking for the floor, etc.;
  • auxiliary cameras for viewing a blackboard, objects, etc.;
  • videotape recorder for displaying and sending any sequence of images, or recording the meeting;
  • telewriting facility;
  • pointer;
  • multiple loudspeakers;
  • equipment for encryption and decryption of user signals.

  6. Applications

Some possible applications are indicated here as examples:
  • the unconducted meeting between distant parties;
  • a formal conducted meeting between distant parties;
  • panel discussion;
  • elaboration of a document, with or without cooperative document handling;
  • presentation of a report in a large company with subsequent discussion;
  • presentation of an object (e.g. a new product) and comments on this;
  • negotiation of a contract with possible private discussions or advice from invited experts;
  • lecturing;
  • distant education or training;
  • tele-auction sale.

Jitter Components

Recent improvements in semiconductor device performance have made bit rates of 28 Gbps commonplace. When using high-speed signals in the 28 Gbps band, the impact on transmission quality of jitter components from various sources in the surrounding environment cannot be ignored. As a result, accurate evaluation of device characteristics requires testing by injecting multiple types of jitter into the device under test (DUT). Previously, jitter tolerance tests for the optical market required only the application of sinusoidal jitter (SJ). More recently, applying SJ alone has made it difficult to perform an accurate evaluation of device characteristics, including the impact of the surrounding environment. This Application Note explains each type of jitter, gives some guidance about measurements for complex jitter tests and describes some concrete examples of jitter tolerance measurements using the Anritsu MP1900A.

Definition of Jitter Component Types

  • SJ (Sinusoidal Jitter)

Sinusoidal Jitter is jitter with a single frequency component; it is the most basic jitter component in jitter tolerance tests. Jitter occurs at various frequencies in real environments, and SJ is used to confirm the jitter tolerance at each frequency.

In transmission methods where the Data and Clock are not transmitted in parallel, the Data signal generated by the transmitting device is retimed by the Clock Recovery circuit at the receiving device. The jitter tolerance characteristics are a key index for evaluating this retiming operation. Generally, Clock Recovery uses an internal Phase Locked Loop (PLL) circuit. Figure 2.1.1 shows a Clock Recovery circuit.

Like a basic PLL circuit, the Clock Recovery circuit has a defined Loop Bandwidth (Figure 2.1.2). A wide Loop Bandwidth gives excellent jitter tolerance and a short Clock Recovery lock time, but it also increases the amount of jitter passed to circuits downstream of the Clock Recovery. A short Clock Recovery lock time means that, when the input signal is lost or the frequency slips momentarily, only a short time is needed until the entire system returns to normal operation. Although a wide loop band achieves this short lock time, the disadvantage is that jitter in the input signal is easily transferred to circuits downstream of the Clock Recovery. For example, cascading several wide-Loop-Bandwidth Clock Recovery circuits may cause jitter to accumulate in subsequent stages, risking incorrect operation of the entire system.
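This loop-bandwidth trade-off can be illustrated numerically with a simple first-order model of the PLL jitter transfer function. The following is only a sketch under stated assumptions: a single-pole response and an illustrative 10 MHz loop bandwidth, neither of which is taken from any standard or from the MP1900A documentation.

```python
import math

def jitter_transfer(f_hz: float, loop_bw_hz: float) -> float:
    """Magnitude of a first-order (single-pole) jitter transfer function.

    Jitter well inside the loop bandwidth is tracked by the recovered clock
    (|H| close to 1); jitter well outside it is attenuated at ~20 dB/decade.
    """
    return 1.0 / math.sqrt(1.0 + (f_hz / loop_bw_hz) ** 2)

loop_bw_hz = 10e6  # illustrative loop bandwidth, not a standardized value
for f_hz in (1e5, 1e6, 1e7, 1e8, 1e9):
    h = jitter_transfer(f_hz, loop_bw_hz)
    print(f"{f_hz / 1e6:8.1f} MHz jitter -> transferred fraction {h:.3f} "
          f"({20 * math.log10(h):6.1f} dB)")
```

At the loop bandwidth itself the transferred fraction is about 0.707 (roughly the 3 dB point); a wider loop therefore tracks, and passes on, jitter up to a higher frequency.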

In a jitter tolerance test, it is important to confirm that the Loop Bandwidth of the above-described PLL is in accordance with the design, and to determine the degree to which it varies; as already described, the Clock Recovery has its own unique Loop Bandwidth. When the jitter frequency and amount of the input Data signal are within the Loop Bandwidth, no error occurs due to phase mismatch, because the recovered Clock tracks the jitter of the input Data signal. In other words, because the input Data and the recovered Clock carry the same amount of jitter, the timing relationship between the Clock and Data at the D-FF in Figure 2.1.1 is maintained and no errors occur.

However, when the jitter in the input signal is out-of-band, the recovered Clock jitter is suppressed and becomes smaller than the jitter in the input Data signal. As a result, the relationship between the Clock and Data at the D-FF in Figure 2.1.1 changes and an error occurs. SJ is the most basic jitter component used in jitter tolerance tests; by changing the modulation frequency and amount in this manner, it is used to confirm the performance limits at which errors occur. In addition, the jitter tolerance test for PCI Express specifies applying two SJs; depending on its option configuration, the MP1900A supports injection of either one SJ or two independent SJs.

Moreover, it can apply a full 1 UI of SJ at jitter tolerance tests in the high-speed modulation region, for modulation frequencies above 10 MHz.
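A jitter tolerance sweep of this kind can be outlined as follows. This is a minimal sketch, not the MP1900A remote-control API: the `apply_sj` and `count_errors` callables are placeholders that a real test program would implement with the instrument's own command set.

```python
def sj_tolerance_sweep(frequencies_hz, amplitudes_ui, apply_sj, count_errors):
    """For each SJ modulation frequency, find the largest SJ amplitude (in UI)
    that the device under test still receives without bit errors."""
    tolerance = {}
    for f_hz in frequencies_hz:
        largest_passing = 0.0
        for amp_ui in sorted(amplitudes_ui):
            apply_sj(frequency_hz=f_hz, amplitude_ui=amp_ui)  # inject SJ
            if count_errors() == 0:          # DUT still error-free at this stress
                largest_passing = amp_ui
            else:
                break                        # first failing amplitude found
        tolerance[f_hz] = largest_passing
    return tolerance

# Dummy stand-ins so the sketch runs on its own (no instrument attached):
if __name__ == "__main__":
    result = sj_tolerance_sweep(
        frequencies_hz=[1e4, 1e5, 1e6, 1e7, 1e8],
        amplitudes_ui=[0.1, 0.2, 0.5, 1.0, 2.0],
        apply_sj=lambda frequency_hz, amplitude_ui: None,
        count_errors=lambda: 0,
    )
    print(result)
```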

  • RJ (Random Jitter)

Random Jitter is a jitter component generated by noise effects that have no frequency dependence, such as the thermal noise commonly occurring within systems, and it covers a wide frequency range.

For RJ, the CEI 3.0 jitter tolerance test standardizes the use of a High Pass Filter (HPF) to remove the RJ components within the PLL band, as shown below, so that only PLL out-of-band components are applied as the stress.

In this standard, the HPF removes frequency components below 10 MHz, but it is necessary to apply frequency components at least exceeding the CDR bandwidth. The maximum applied RJ frequency is half the Baud rate, so for a 28 Gbps Baud rate the maximum is 14 GHz. However, the actual CDR bandwidth does not extend to half the Baud rate, and it is usually sufficient to test a range of about 10 times the normal PLL bandwidth. Consequently, if the measuring instrument's RJ band extends to 100 to 200 MHz, the range required by the test is sufficiently covered. As described in 2.1 above, this is because jitter accumulates as the CDR bandwidth becomes wider, causing overall system instability.
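The frequency ranges discussed here follow directly from the baud rate and the assumed PLL bandwidth; the short sketch below only restates that arithmetic (half the baud rate as the theoretical ceiling, roughly ten times the PLL bandwidth as the practical test band, and the 10 MHz HPF from CEI 3.0). The 10 MHz loop-bandwidth figure used in the example call is an assumption for illustration.

```python
def rj_test_band(baud_rate_bps: float, pll_bw_hz: float, hpf_cutoff_hz: float = 10e6):
    """Return (HPF cutoff, practical upper RJ frequency, theoretical maximum).

    The theoretical maximum RJ frequency is half the baud rate; in practice a
    band of about 10x the CDR/PLL loop bandwidth is sufficient, with the HPF
    removing components inside the PLL band.
    """
    theoretical_max_hz = baud_rate_bps / 2.0
    practical_max_hz = 10.0 * pll_bw_hz
    return hpf_cutoff_hz, practical_max_hz, theoretical_max_hz

hpf_hz, practical_hz, theoretical_hz = rj_test_band(28e9, pll_bw_hz=10e6)
print(f"HPF cutoff:               {hpf_hz / 1e6:.0f} MHz")
print(f"Practical RJ band up to:  {practical_hz / 1e6:.0f} MHz")
print(f"Theoretical max RJ freq.: {theoretical_hz / 1e9:.0f} GHz")
```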

The MP1900A has a built-in HPF and LPF for easy RJ injection. To increase the reproducibility of jitter tolerance tests, the built-in HPF and LPF should be used in combination instead of attaching external filters. Additionally, the MP1900A has a Filter setting item for RJ; when PCIe is selected for the Filter, the amounts of RJ (ps rms) for the Low- and High-Frequency bands required by PCI Express can be set independently.

  • BUJ (Bounded Uncorrelated Jitter)

Bounded Uncorrelated Jitter is generally jitter caused by crosstalk from nearby Data signals. When it is generated by a measuring instrument, a PRBS signal produced from an independent clock source is used, so that it has no correlation with the measured Data signal. If a PRBS15 signal is used as the measured Data signal, it is better to use a pattern other than PRBS15 for the BUJ to avoid interference.

Moreover, to suppress interference, the BUJ PRBS bit rate is also best set in a defined ratio to the measurement-target bit rate. The CEI 3.1 standard recommends using PRBS31 as a general test pattern, as well as for jitter tolerance tests, as the measurement-target Data signal. The BUJ PRBS patterns are from 7 to 11 stages, with baud rates of 1/10 to 1/3 of the measurement target; the standard recommends using an LPF of 1/20 to 1/10 of the BUJ PRBS baud rate.

An example setting prescribed by the existing CEI 3.0 standard, using a 28 Gbps PRBS31 signal as the measurement-target Data signal, is described below. Since the measurement target is 28 Gbps, the required BUJ baud rate is 1/10 to 1/3 of this, i.e. 2.8 Gbps to 9.3 Gbps; since the LPF is 1/20 to 1/10 of the BUJ baud rate, it is set between 140 MHz and 930 MHz. Moreover, the BUJ PRBS is selected from 7, 9 or 11 stages. If a 7-stage PRBS is used as the measurement-target signal, it is better to use either 9 or 11 stages for the BUJ rather than the same PRBS pattern as the measurement target.
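These numbers can be reproduced from the two ratios quoted above. The helper below is only a convenience sketch of that arithmetic; note that 9.33 Gbps and 933 MHz are rounded to 9.3 Gbps and 930 MHz in the text.

```python
def cei30_buj_settings(target_baud_bps: float):
    """Derive the BUJ PRBS baud-rate range (1/10 to 1/3 of the measurement
    target) and the LPF range (1/20 to 1/10 of the BUJ baud rate)."""
    buj_min_bps = target_baud_bps / 10.0
    buj_max_bps = target_baud_bps / 3.0
    lpf_min_hz = buj_min_bps / 20.0
    lpf_max_hz = buj_max_bps / 10.0
    return (buj_min_bps, buj_max_bps), (lpf_min_hz, lpf_max_hz)

(buj_lo, buj_hi), (lpf_lo, lpf_hi) = cei30_buj_settings(28e9)
print(f"BUJ baud rate: {buj_lo / 1e9:.2f} to {buj_hi / 1e9:.2f} Gbps")
print(f"BUJ LPF:       {lpf_lo / 1e6:.0f} to {lpf_hi / 1e6:.0f} MHz")
```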

The following tables show the BUJ settings (Table 2.3.1) and the MP1900A BUJ baud rate setting range (Table 2.3.2) when applying BUJ using the MP1900A.

Table 2.3.1 BUJ Settings

  Items   | Value        | Note
  PRBS    | 7, 9, 11     | Uses different pattern from the main signal
  Bitrate | 1/10 to 1/3  | 2.8 to 9.3 Gbps (@28 Gbps)
  LPF     | 1/20 to 1/10 | 140 to 930 MHz (@28 Gbps)

Table 2.3.2 MP1900A BUJ Baud Rate Setting Range

  Baud rate (Gbps) | Step (kbps)
  0.1 to 3.2       | 1
  4.9 to 6.25      | 1
  9.8 to 12.5      | 1

Using the previous example, the BUJ baud rate for a 28 Gbps target is from 2.8 Gbps to 9.3 Gbps, but the highest BUJ baud rate supported by the MP1900A in this range is 6.25 Gbps. In this case, the standard specifies an LPF setting range from 312.5 MHz to 625 MHz. Since the MP1900A LPF used for BUJ can be selected from 500 MHz, 300 MHz, 200 MHz, 100 MHz and 50 MHz, 500 MHz should be selected in this example. Although the BUJ pattern can be selected from 7, 9, 11, 15, 23 and 31 stages when using the MP1900A, either 7, 9 or 11 is selected based on the CEI 3.0 standard.
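Automating that selection is straightforward. The available cutoffs below are the ones listed in the paragraph above; the rule of picking an available cutoff that falls inside the 1/20 to 1/10 range of the BUJ baud rate is an assumption about how one might script the choice, not an Anritsu-documented procedure.

```python
MP1900A_BUJ_LPF_MHZ = [500, 300, 200, 100, 50]  # available cutoffs, per the text above

def pick_buj_lpf(buj_baud_bps: float, available_mhz=MP1900A_BUJ_LPF_MHZ):
    """Return the highest available LPF cutoff (in MHz) that lies inside the
    1/20 to 1/10 range of the BUJ baud rate, or None if none fits."""
    lo_mhz = buj_baud_bps / 20.0 / 1e6
    hi_mhz = buj_baud_bps / 10.0 / 1e6
    candidates = [f for f in available_mhz if lo_mhz <= f <= hi_mhz]
    return max(candidates) if candidates else None

# With the 6.25 Gbps maximum BUJ baud rate from the example above,
# the allowed range is 312.5 to 625 MHz, so 500 MHz is selected:
print(pick_buj_lpf(6.25e9))  # -> 500
```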

  • Half Period Jitter (F/2 Jitter)

With the recent increases in semiconductor bit rates, semiconductor vendors are avoiding hard-to-handle full-rate clocks inside their devices; instead, a Selector is used at the final output stage, and designs using a half-rate clock are increasingly common (Figure 2.4.1).

If the half-rate clock does not have a 50% duty cycle, the output Data bit periods appear as shown in Figure 2.4.3, with narrow and wide bits repeating alternately. This is called Half Period Jitter (HPJ).

The factors causing the Clock duty cycle to change include drift in the Clock Buffer threshold voltage in the semiconductor device, as well as distortion of the Clock waveform, which can be caused by inadequate bandwidth among other issues.

Half Period Jitter is used to confirm whether or not a signal that is output under these conditions can be received correctly by the receiver circuit.

Since the MP1900A has a function for applying HPJ to the Pulse Pattern Generator (PPG) output, it can add HPJ while also applying jitter types such as SJ, RJ and BUJ, creating even more severe stress conditions. In determining the amount of HPJ to set, in order to separate out the DJ (DDPWS) caused by ISI, it is first necessary to generate a 1010 Clock pattern as the Data signal (the jitter measured on this pattern is defined as HPJ) and then to calibrate the DJ.
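The size of the resulting half-period jitter follows directly from the duty-cycle error of the half-rate clock. The sketch below assumes the simple model described above (alternate output bits widened and narrowed by the same amount); the 48% duty figure is purely illustrative.

```python
def half_period_jitter(duty_cycle: float, bit_rate_bps: float):
    """HPJ implied by a non-50% duty half-rate clock.

    With a half-rate clock of period 2 UI, alternate output bits have widths
    2*UI*duty and 2*UI*(1 - duty); each deviates from 1 UI by |2*duty - 1| UI.
    Returns the deviation in UI and in picoseconds.
    """
    ui_ps = 1e12 / bit_rate_bps
    hpj_ui = abs(2.0 * duty_cycle - 1.0)
    return hpj_ui, hpj_ui * ui_ps

hpj_ui, hpj_ps = half_period_jitter(duty_cycle=0.48, bit_rate_bps=28e9)
print(f"48% duty at 28 Gbps -> HPJ = {hpj_ui:.2f} UI ({hpj_ps:.2f} ps)")
```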

 

Video telephony services – General

This is a brief introduction to the general features and attributes of the video telephony services, regardless of the network environment in which the service might be provided. The video telephony services are classified into the following two main categories:

  • Video telephony service for narrow-band networks;
  • Video telephony service for broadband networks.

According to this classification, Recommendation F.721 covers the basic video telephony service in the Integrated Services Digital Network (ISDN). Dedicated service Recommendations for higher quality video telephony services are for further study. The higher quality video telephony service will not necessarily employ any fixed information transfer rate, as variable bit rate coding may be used. The classification of video telephony services is depicted in Figure 1/F.720.

Video telephony services

Video telephony service is an audiovisual conversational teleservice providing bidirectional symmetric real-time transfer of voice and moving colour pictures between two locations (person-to-person) via the networks involved.

The minimum requirement is that under normal conditions the picture information transmitted is sufficient for the adequate representation of fluid movements of a person displayed in head and shoulders view (see Note).

Note – The smoothness of the movements in the reproduced picture depends essentially on the amount of motion relative to the transfer rate of the transmitted picture information. The above requirement is expected to be met under conditions where the amount of motion is limited or the throughput is high enough not to impair the received picture. Degradation is likely to appear as increased blurring and jerkiness in the reproduced picture. Besides this, other artifacts may occur.

Description

General description

The plain video telephony service includes only the basic user requirements, namely speech and motion picture with essential controls and indications; the service can be enhanced with options providing auxiliary facilities such as transmission of high resolution still images of documents, photographs, drawings, charts, objects, etc. (see Note).

Note – Speech supported only by still picture transmission and/or telewriting is not considered as a part of the video telephony service. This type of communication may either be considered as a separate audiovisual service or as a form of audiographic conference service in a point-to-point configuration.

The video telephony service is likely to be used in much the same way as the ordinary telephone service for individual communication, the enhancement being the visibility of the communicating parties, which opens up a number of possible new applications. An essential feature of the service is that it is always provided in conjunction with ordinary telephony, allowing the user to intercommunicate with all kinds of audiovisual services by using merely the speech communication facility of a video telephone terminal. In other words, video telephone terminals must be capable of supporting telephony.

A video telephony service may also be used in applications such as communication between speech- and hearing-impaired persons using sign language, and remote surveillance, where the speech communication facility is of minor importance.

Where the service is provided in a network and terminal environment offering a number of different quality levels, depending among other things on the transmission medium used and the respective charging, it must be possible for the user to select the level/mode of operation and/or type of video telephony service he/she wishes, and also to change it during the call if this is provided by the network or supported by the terminals. The latter option may be provided as a supplementary service. Two different types of call shall be possible:

  • Point-to-point calls;
  • Multipoint calls (see Note).

Note – For multipoint calls a unit for mixing speech signals and/or combining video signals is required. This will be defined in another context later.

Description of various video telephony services

Two primary video telephony service categories have been identified, for narrow-band networks and for broadband networks. The principal features of either type of service are described below.

  1. Video telephony service for narrow-band networks

The video telephony service for narrow-band networks provides end-to-end communication of moving colour pictures with spatial resolution, temporal resolution and quality equivalent to that obtainable by coding the video signal according to Recommendation H.261 (QCIF and CIF format).

The video telephone service may be optionally enhanced by facilities such as the transfer of still pictures, graphics, text and end-to-end control messages.

A basic video telephone teleservice for the narrow-band ISDN has been fully standardized according to Recommendations I.210 and I.240. The stage 1 description for the video telephony teleservice for ISDN is contained in Recommendation F.721.

In the future, basic video telephony services for other narrow-band networks (e.g. radio mobile networks, private networks) could be envisaged.

  2. Video telephony service for broadband networks

The video telephony service for broadband networks provides end-to-end communication of moving colour pictures with high spatial and temporal resolution and video quality equivalent to conventional TV standards (PAL, SECAM, NTSC) or better, enhanced voice/sound quality (possibly with stereo transmission), and optionally facilities for the transfer of still pictures, graphics, text and end-to-end control messages.

Applications related to video telephony

The video telephony service can be utilized in a broad range of applications depending on the Quality of Service for audio and/or video that can be achieved in different types of services.

A video telephony service employing the bearer capabilities of broadband networks is expected to meet the needs of the applications listed below. Because of the high video quality this service provides, it offers, besides the means for face-to-face dialogue, the possibility to transfer any kind of moving scene. Also, pictures of three-dimensional objects and graphic material, e.g. sketches, drawings, photographs and documents containing text and graphics, can be transferred without any restrictions. Commercial and domestic scenes, instruction procedures and films can be transferred to the communication partner.

The constraints that the limited transfer capability of narrow-band networks imposes on spatial and temporal image resolution make certain types of communication less suitable for the video telephony service for narrow-band networks.

Taking the aspects mentioned above into account, the following principal applications of video telephony are possible, among others:

  1. Face-to-face dialogues involving at least head-and-shoulder images;
  2. Dialogue including interactive viewing of documents such as sketches, diagrams or charts and objects that can be shown on the screen;
  3. Access of the user to videoconferences;
  4. Remote video surveillance;
  5. Communication between hearing and speech impaired persons using the sign language.

Based on these examples, other enhanced videotelephone applications may also emerge. The user should be given the possibility to select the essential parameters best suiting his specific application.

For instance, in application 2) high spatial resolution is required, while in application 5) good motion tolerance is important. Moreover, the transfer rate used for audio should be selectable, in particular when it has charging or video quality implications.

Supplementary services and enhancements

The same spectrum of supplementary services supporting telephony is in principle applicable to video telephony. Other supplementary services or enhancements dedicated to video telephony, e.g. video telephony conference services, are for further study. Relevant enhancements are those supporting the transfer of high-resolution still pictures and graphics information in a variety of formats such as digitized video or other standard document formats. A paper/film scanner or video frame capture may be used as an image input source. Besides this, the possibility to access electronic mail or videotex services with a video telephone terminal may be supported.

Specific terminology

Fall back: Procedures performed either by the network or by the calling terminal that allow the calling user to be connected in any case with the called user at whichever terminal the call is offered (3.1 kHz telephone terminal or video telephone terminal).

Terminal aspect

General systems requirements

In order to perform the basic functions necessary for the video telephony service the terminal equipment must include devices capable of:

  • capturing participant’s picture(s);
  • displaying remote user’s picture(s);
  • capturing audio;
  • reproducing audio;
  • audio coding;
  • video coding;
  • management of network interfaces.

The terminal equipment also includes devices capable of performing the following functions:

  • user control;
  • user indication;
  • self-view;
  • testing.

Terminals intended to be used in multipoint connections may need additional basic functions related to the multipoint operation. These functions are for further study.

Video telephone terminal equipment

The basic video telephone terminal equipment may include only the basic elements listed in 6.1.

Possible enhancements to the equipment are:

  • orientable camera and zoom;
  • still picture camera;
  • interfaces for an additional camera, an additional screen or a video recorder;
  • remote control of a distant camera for some specific applications;
  • keyboard for the videotex service;
  • telewriting.

This list is not exhaustive and is only given as a set of examples. As a general rule, the number of controls that a user has to operate should be kept to a minimum. Training should not be necessary for using the terminal. Clear and concise instructions should be given, for instance on the screen, especially for the supplementary services with which the user may not be very familiar.

Audiovisual Services – Framework

A number of services are, or will be, defined in CCITT having as their common characteristic the transmission of speech together with other information reaching the eventual user in visual form. This Recommendation H.200 concerns a set of such services which should be treated in a harmonized way; it is convenient to refer to the members of this set as "audiovisual services" (abbreviated to AV services).

Framework

Recommendations in the H.200 set are arranged in three main sections:

  1. Service definitions – These specify the service as seen by the user, including basic service, optional enhancements, quality, and intercommunication requirements, together with operational aspects; technical implementation methods are taken into account but not defined herein.
  2. Infrastructure – This section includes all the Recommendations which are applicable to two or more distinct services; these encompass network configuration, frame structures, control/indications, communication/intercommunication and audio/video coding. The "infrastructure" covers the generality of signals which flow on unrestricted digital bearers over established network connections; it does not include the methods of call establishment and control, which are orchestrated by signals outside these bearers.
  3. Systems and terminal equipment – This section deals with the technical implementation of specific services: it, therefore, includes service-specific equipment for the application layer and draws upon the infrastructure Recommendations to identify the detailed processes required for the particular service.

An "other aspects" section is included, covering such matters as call control, including aspects which are particular to AV services but which, since some involve out-of-band signals, do not come within the scope of the infrastructure section above.

Framework for Recommendations for audiovisual services 

A.1 Service definition

  AV No.  | Title                                                       | Recommendation No.
  AV.100  | General AV services                                         | Draft available
  AV.101  | Teleconference service                                      | F.701 (F.710 in Blue Book)
  AV.110  | General principles for audiographic conference service      | F.710
  AV.111  | Audiographic conference service                             | F.711
  AV.120  | Videotelephony services – General                           | F.720
  AV.121  | Basic narrow band videophone service in the ISDN            | F.721
  AV.122  | Broadband videotelephony services                           | F.722
  AV.130  | Videoconference service – General                           | F.730
  AV.131  | Videoconference services – General                          |
  AV.132  | Broadband videoconference services                          | F.732
  AV.140  | Audiovisual interactive services – General                  | F.740
  AV.150  | (Other AV services not yet defined)                         |
  AV.160  | (Audiovisual service applications)                          |
  AV.161  | Service-oriented requirements for telewriting applications  | F.761 (F.730 in Blue Book)

A.2 Infrastructure

  AV No.  | Title                                                                                                | Recommendation No.
  AV.200  | (General AV infrastructure)                                                                          |
  AV.210  | (Reference networks)                                                                                 |
  AV.220  | (Transmission multiplex structure)                                                                   |
  AV.221  | A frame structure for a 64 to 1920 kbit/s channel in audiovisual teleservices                        | H.221
  AV.230  | Frame-synchronous control and indication signals for audiovisual systems                             | H.230
  AV.231  | Multipoint control units for audiovisual systems using digital channels up to 2 Mbit/s               | H.231
  AV.232  | (Broadband multipoint control)                                                                       |
  AV.233  | Confidentiality system for audiovisual services                                                      | H.233
  AV.240  | (Communication – Principles)                                                                         |
  AV.241  | System aspects for the use of the 7 kHz audio codec within 64 kbit/s                                 | G.725
  AV.242  | System for establishing communication between audiovisual terminals using digital channels up to 2 Mbit/s | H.242
  AV.243  | Procedures for establishing communication between three or more audiovisual terminals using digital channels up to 2 Mbit/s | H.243
  AV.250  | (Audio Coding)                                                                                       |
  AV.251  | Pulse code modulation (PCM) of voice frequencies                                                     | G.711
  AV.252  | 7 kHz audio-coding within 64 kbit/s                                                                  | G.722
  AV.253  | (Audio coding at 24/32 kbit/s)                                                                       |
  AV.254  | Coding of speech at 16 kbit/s using low-delay code excited linear prediction                         | G.728
  AV.255  | Audio coding for storage/retrieval                                                                   | MPEG audio (DIS 11172 Part 3)
  AV.260  | (Video Coding)                                                                                       |
  AV.261  | Video codec for audiovisual services at p × 64 kbit/s                                                | H.261
  AV.262  | (Video coding for use on B-ISDN)                                                                     |
  AV.263  | Video coding for storage/retrieval up to about 1 Mbit/s                                              | MPEG-1 Video (DIS 11172 Part 2)
  AV.264  | (Video coding for storage/retrieval at less than 10 Mbit/s)                                          | MPEG-2
  AV.266  | (Video coding for distribution)                                                                      |
  AV.270  | Overview of AGC Recommendations                                                                      | T.120
  AV.271  | Audiographic conferencing                                                                            | T.121
  AV.272  | Generic conference control                                                                           | T.124
  AV.273  | Protocol stacks for audiographic and audiovisual teleconference applications                         | T.123
  AV.274  | Multipoint communications service                                                                    | T.122
  AV.280  | (For future purposes)                                                                                |
  AV.290  | (Interworking with pre-existing systems)                                                             |
  AV.291  | (Interworking with H.120/H.130 systems)                                                              |

A.3 Systems and terminal equipment

  AV No.  | Title                                      | Recommendation No.
  AV.300  | (General AV systems/terminals)             |
  AV.310  | (TC systems and equipment)                 |
  AV.311  | Audiographic teleconference                | Draft available
  AV.320  | Visual telephone systems and equipment     | H.320
  AV.321  | (Broadband visual telephone)               |
  AV.330  | (Equipment for AV retrieval, systems)      |
  AV.331  | Broadcasting type multipoint systems       | H.331

A.4 Other aspects

  AV No.  | Title                                  | Recommendation No.
  AV.400  | (Other aspects)                        |
  AV.410  | (Reservation systems)                  |
  AV.420  | [Call control (including multipoint)]  |
  AV.440  | (Multipoint call set-up)               |

 

List of audiovisual services covered

The following audiovisual services shall be included in the harmonized set:

  • narrowband videophone (p × 64 kbit/s);
  • broadband videophone (a teleservice for broadband ISDN);
  • narrowband videoconferencing (p × 64 kbit/s);
  • broadband video conferencing (a teleservice for broadband ISDN);
  • audiographic teleconferencing;
  • telephony (a degenerate case of an AV service, included for intercommunication purposes);
  • telesurveillance.

The following audiovisual services are in the process of being defined, and consideration should be given to their inclusion in the set for either of the reasons given in 2.

  • video mail;
  • videotex (including pictures and sound);
  • video retrieval;
  • high-resolution image retrieval;
  • distribution services.

List of networks covered

The possible use of the following networks is taken into account:

  • ISDN (basic and primary access);
  • B-ISDN (using ATM);
  • analogue (PSTN);
  • LANs (e.g. FDDI);
  • mobile/radio;
  • leased lines (digital).

 

Visual telephone systems and terminal equipment 

This is a brief introduction to H.320. Recommendation H.320 covers the technical requirements for narrow-band visual telephone systems and the services defined in the H.200/F.720-series Recommendations, where channel rates do not exceed 1920 kbit/s.

NOTE – It is anticipated that this Recommendation H.320 will be extended to a number of Recommendations each of which would cover a single video conferencing or videophone service (narrow-band, broadband, etc.). However, large parts of these Recommendations would have identical wording, while in the points of divergence the actual choices between alternatives have not yet been made; for the time being, therefore, it is convenient to treat all the text in a single Recommendation.

The service requirements for visual telephone services are presented in ITU-T Rec. F.720 for video telephony and ITU-T Rec. F.702 for videoconference; video and audio coding systems and other technical aspects common to audiovisual services are covered in other Recommendations in the H.200/F.700-series.

System description

A generic visual telephone system is shown in Figure 1. It consists of terminal equipment, network, Multipoint Control Unit (MCU) and other system operation entities.

Figure 1 – Generic visual telephone system

 

The configuration of the terminal equipment, consisting of several functional units, is also shown in Figure 1. The video I/O equipment includes cameras, monitors and video processing units to provide functions such as a split-screen scheme. The audio I/O equipment includes microphones, loudspeakers and audio processing units to provide functions such as acoustic echo cancellation (see ITU-T Rec. G.167). Telematic equipment includes visual aids such as an electronic blackboard, a text conversation facility and a still picture transceiver to enhance basic visual telephone communication.

The system control unit carries out functions such as network access through end-to-network signalling, and end-to-end control to establish a common mode of operation and to ensure proper operation of the terminal through end-to-end signalling. The video codec carries out redundancy reduction coding and decoding for video signals, while the audio codec does the same for audio signals. The delay in the audio path compensates for the video codec delay in order to maintain lip synchronization.

The mux/demux unit multiplexes the transmitted video, audio, data and control signals into a single bit stream and demultiplexes a received bit stream into its constituent multimedia signals. The network interface performs the necessary adaptation between the network and the terminal according to the user-network interface requirements defined in the I.400-series Recommendations (see Note).

NOTE – For leased line networks, the network interface is defined in ITU-T Rec. G.703 for bit rates in the range of 64 kbit/s to 2048 kbit/s. An alternative interface is defined in ITU-T Rec. X.21. For n × H0 channels, timeslot allocation is given in clause 5/G.704 for the G.703 interface. It is stressed that interworking towards ISDN requires synchronous operation of the leased line network.

Signals

Visual telephone signals are classified into video, audio, data and control as follows:

  • Audio signals are continuous traffic and require real-time transmission. NOTE – In order to reduce the average bit rate of audio signals, voice activation can be introduced (in which case the audio signals are no longer continuous).
  • Video signals are also continuous traffic; the bit rate allocated to video signals should be as high as possible, in order to maximize the quality within the available channel capacity.
  • Data signals include still pictures, facsimile and documents, or other facilities such as text conversation; this signal may occur only occasionally as required and may temporarily displace all or part of the audiovisual signal content. It should be noted that data signals are associated only with optional enhancements to the basic visual telephone system; therefore, the opening of a path to carry such signals is preceded by negotiation between the terminals.
  • Control signals carry system control information. The path for terminal-to-network control signals is provided in the D-channel, while the path for terminal-to-terminal control signals is provided in the BAS or the service channel, only when necessary, by the mechanism defined in ITU-T Rec. H.221.

 

BROADBAND AUDIOVISUAL COMMUNICATION –  SYSTEMS AND TERMINALS 

This Recommendation H.310 covers the technical requirements for the systems and terminals of broadband audiovisual communication services defined in H.200/AV.100-series Recommendations.

This Recommendation defines both unidirectional and bidirectional broadband audiovisual terminals. The classification of H.310 terminals into different terminal types is based on the audiovisual and ATM Adaptation Layer (AAL) capabilities defined in 6.2. There are two classes of unidirectional terminals: the Receive-Only Terminal (ROT) class and the Send-Only Terminal (SOT) class.

In this Recommendation, bidirectional terminal types are referred to as Receive-and-Send Terminal (RAST) types. The definition of H.310 RAST terminals is based on the following interoperability principles:

  1. Interworking between H.310 RAST terminal types and other N-ISDN/B-ISDN (H.320/H.321) audiovisual terminals is mandatory.
  2. Interworking among the different H.310 RAST terminal types is also mandatory.

Three types of RAST terminals are defined: RAST-1, RAST-5, and RAST-1&5.

RAST-1 and RAST-1&5 terminals may be connected to public networks and customer premise networks (private networks), while RAST-5 terminals may only be connected to customer premise networks (private networks).

For interworking with H.320/H.321 terminals, all three RAST terminal types support common H.320 audiovisual modes. For interworking between RAST-5 terminals and RAST-1 or H.320/H.321 terminals, a gateway between the B-ISDN and the customer-premises ATM network (located not inside the public network but on the customer premises) is needed to provide the interoperability functions.

The video and audio coding and other technical aspects that are applicable to more than one distinct service are covered in H.200/AV.200-series Recommendations.

Figure 1 shows a generic broadband audiovisual communication system. It consists of terminal equipment, network, Multipoint Control Unit (MCU) and the constituent elements of the terminal equipment. The corresponding Recommendations are also identified.

Figure 1 – Generic broadband audiovisual communication system

All H.310 terminals are required to support H.245 as their communication control protocol so that they can support their intended services and interoperate with each other. Accordingly, H.310 terminals shall use the H.222.1 acknowledged procedures for the subchannel signalling.

It is important to note that the generic H.310 terminal shown in Figure 1 can represent any of the unidirectional or bidirectional terminal types defined in this Recommendation.

The definition of H.310 terminal types is intended for the support of the following applications:

  • Conversational services (e.g. video conferencing and video telephony services).
  • Retrieval services.
  • Messaging services.
  • Distribution services with the individual presentation by the recipient (e.g. video-on-demand services).
  • Distribution services without the individual presentation by the recipient (e.g. broadcast TV services).
  • Video transmission.
  • Surveillance.

System description

System configuration

The interaction among the H.310 terminal capabilities is based on the protocol reference model shown in Figure 2, which illustrates the protocol stacks for the audiovisual, data, call management (DSS 2 and H.245), and other control and indication signals that can be supported by the different terminal types of this Recommendation.

Figure 2 – Protocol stacks for the audiovisual, data, call management (DSS 2 and H.245) and other control and indication signals

  Recommendation | Title
  Q.2931         | User network interface layer 3 specifications for basic call/connection control
  Q.2941.1       | DSS 2 user-generated identifiers
  Q.2961         | Support of additional traffic parameters
  Q.2961.2       | ATM transfer capability coding in the broadband bearer capability information element
  Q.2962         | Negotiation of traffic and QoS parameters (during call/connection establishment)
  Q.2963         | Renegotiation/modification of traffic and QoS parameters (for already established calls/connections)
  Q.2964         | B-ISDN look-ahead
  Q.2971         | Point-multipoint call/connection control
  Q.298x         | Multiconnection calls

Terminal types

This Recommendation defines both unidirectional and bidirectional broadband audiovisual terminals.

The classification of H.310 terminals into different terminal types is based on audiovisual and AAL capabilities as summarized in Table 2.

 

 

Table 2 – Classification of H.310 terminal types

  Audiovisual transport |      | AAL 1  | AAL 5  | AAL 1&5
  Unidirectional        | ROT  | ROT-1  | ROT-5  | ROT-1&5
  Unidirectional        | SOT  | SOT-1  | SOT-5  | SOT-1&5
  Bidirectional         | RAST | RAST-1 | RAST-5 | RAST-1&5

Unidirectional terminal types (ROT and SOT)

Two classes of unidirectional terminals are defined: Send-Only Terminal (SOT) and Receive-Only Terminal (ROT).

Three types of H.310 unidirectional terminals are defined, based on their supported AALs, for each of the two classes. The H.310 defined unidirectional terminal types are: 

  • H.310 ROT-1 and SOT-1, which support AAL 1;
  • H.310 ROT-5 and SOT-5, which support AAL 5;
  • H.310 ROT-1&5 and SOT-1&5, which are composite terminals supporting both AAL 1 and AAL 5.

Each of these terminal types shall support the H.310 native communication mode. The native communication mode consists of H.222.1, with ISO/IEC 11172-3 Layer 2, H.262 and H.245 as the audio, video and control protocols.

Each of these terminals may be connected to public B-ISDN and customer premise networks (private networks).

NOTE – Some pairs of unidirectional terminals will not interwork with each other. This may be due to an incompatible class, such as a ROT-1 connecting with a ROT-1&5, or an incompatible type, such as a ROT-1 connecting to a SOT-5.

 

Bidirectional terminal types (RAST)

Three types of H.310 bidirectional receive-and-send terminals (RAST) are defined based on their communication modes and supported AALs. The H.310 defined terminal types are:

  • H.310 RAST-1, which supports AAL 1;
  • H.310 RAST-5, which supports AAL 5;
  • H.310 RAST-1&5, which is a composite terminal supporting both AAL 1 and AAL 5.

Each of these terminal types shall support an H.310 native communication mode as well as an H.320/H.321 interoperation mode. Figure 3 depicts the protocol stacks for these two modes for each of the terminal types.

Figure 3 – Protocol stacks for the two communication modes of each RAST terminal type

The H.310 RAST-1 terminal supports AAL 1. Its native H.310 communication mode consists of H.222.1, with G.711, H.262, and H.245 as the audio, video, and control protocols. Its H.320/H.321 interoperation mode supports the full H.321/Annex A protocol stack. The H.310 RAST-5 terminal supports AAL 5. Its native H.310 communication mode consists of  H.222.1 with G.711, H.262, and H.245 as the audio, video and control protocols. Its H.320/H.321 interoperation mode supports the full H.321/Annex B protocol stack.

The H.310 RAST-1&5 is a composite of the RAST-1 and RAST-5 terminal types and supports all four modes described above.

The RAST-1 and RAST-1&5 terminals connect to public B-ISDN and customer premises networks (private networks); they can interwork with H.320 via an I.580 interworking unit and directly with H.321/Annex A terminals. The RAST-5 terminal connects to customer premises networks (private networks); it can interwork directly with H.321/Annex B terminals but requires a gateway to interwork with H.320, H.321/Annex A and H.310 RAST-1 terminals. See clause 12 for interworking scenarios.
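The connectivity and interworking rules in this subclause can be restated compactly as a lookup table. The encoding below only captures what the text says (which networks each RAST type connects to and whether a customer-premises gateway is needed); it is an illustrative sketch, not part of the Recommendation.

```python
# Illustrative encoding of the interworking rules described above.
RAST_PROFILES = {
    "RAST-1":   {"aals": {"AAL1"},         "networks": {"public B-ISDN", "private ATM"}},
    "RAST-5":   {"aals": {"AAL5"},         "networks": {"private ATM"}},
    "RAST-1&5": {"aals": {"AAL1", "AAL5"}, "networks": {"public B-ISDN", "private ATM"}},
}

def needs_gateway(terminal_type: str) -> bool:
    """RAST-5 needs a customer-premises gateway to reach H.320, H.321/Annex A
    and H.310 RAST-1 terminals; RAST-1 and RAST-1&5 reach H.320 via an I.580
    interworking unit and H.321/Annex A terminals directly."""
    return terminal_type == "RAST-5"

for terminal, profile in RAST_PROFILES.items():
    print(terminal, sorted(profile["networks"]), "gateway needed:", needs_gateway(terminal))
```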

Terminal capabilities

The definition and classification of H.310 terminal types and their communication modes are based on the following capabilities:

  • Audiovisual and Data;
  • Network Adaptation;
  • Signalling (both user-to-user and user-to-network).

A communication mode is defined as a combination of certain parameters of the above capabilities.

Based on the different capabilities of H.310 terminals, two classes of communication modes are specified:

  • H.320/H.321 interoperation modes;
  • native H.310 communication modes.
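The idea that a communication mode is a combination of these capability categories can be sketched as a small data structure. Everything below is illustrative: the field names and the example protocol labels are assumptions used to show the structure, not normative definitions from this Recommendation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CommunicationMode:
    """A communication mode as a combination of capability parameters."""
    name: str
    audiovisual_and_data: frozenset   # audio/video/mux (and optional data) protocols
    network_adaptation: frozenset     # supported AALs
    signalling: frozenset             # user-to-user and user-to-network signalling

# Two example modes, with illustrative protocol labels only:
native_h310 = CommunicationMode(
    name="native H.310",
    audiovisual_and_data=frozenset({"H.262", "H.222.1"}),
    network_adaptation=frozenset({"AAL1", "AAL5"}),
    signalling=frozenset({"H.245", "DSS 2"}),
)
h320_interop = CommunicationMode(
    name="H.320/H.321 interoperation",
    audiovisual_and_data=frozenset({"H.261", "G.711", "H.221"}),
    network_adaptation=frozenset({"AAL1"}),
    signalling=frozenset({"H.242"}),
)

print(native_h310.name, sorted(native_h310.network_adaptation))
```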

Unidirectional H.310 terminals need only support native H.310 communication modes of operation, that is, unidirectional terminals may optionally support the H.320/H.321 interoperation modes.

At the start of the call, H.310 terminals shall identify the remote terminal type (H.320/H.321, H.310 bidirectional, etc.) via exchange of Q.2931 information elements, and shall use either H.245 or H.242 to perform capability exchange and other procedures.

This Recommendation mandates the support of particular functionalities by the different terminal types. However, this does not imply that a particular communication mode shall be used by that terminal type during a given communication session. For example, RAST terminals shall support H.261 video capabilities for interworking with H.320/H.321 terminals, but the use of H.261 in the native mode, that is, when H.222.1 is used, is optional.

The following subclauses describe mandatory and optional capabilities. Optional capabilities are included as guidelines for implementations and are in no way intended to be exhaustive lists of what may be implemented.

The use of the H.245 control channel

All H.310 terminals shall support H.245 messages and procedures in the native H.310 communication mode. The exact set of H.245 messages and procedures that are mandated in H.310 terminals, and their usage, are specified in this subclause.

The H.245 control channel carries end-to-end control messages governing the operation of the H.310 system, including capabilities exchange, opening and closing of logical channels, mode preference requests, round-trip-delay, maintenance loop and master-slave determination.

There shall be exactly one control channel in each direction within H.310 systems, which shall use the messages and procedures of Recommendation H.245. The H.245 control channel shall be set up at the beginning of communication, before the transmission of audiovisual information.

Recommendation H.245 specifies a number of independent protocol entities which support terminal-to-terminal signalling. A protocol entity is specified by its syntax (messages), semantics, and a set of procedures which specify the exchange of messages and the interaction with the user. H.310 terminals shall support the syntax, semantics and procedures of the following protocol entities, as specified in the following subclauses:

  • Master-slave determination.
  • Capabilities exchange.
  • Logical channel signalling.
  • Bidirectional logical channel signalling.
  • Close logical channel signalling.
  • Mode request.
  • Round trip delay determination.
  • Maintenance loop signalling.
  • Specific commands and indications.

Figure 4 shows the interaction between the H.245 protocol entities and H.310.

Figure 4 – Interaction between the H.245 protocol entities and H.310

All H.245 messages are conveyed by the underlying protocol stack, as specified in Annex A, which provides a reliable end-to-end transmission of H.245 messages using acknowledgement of correct receipt within each layer protocol.

H.310 terminals shall be capable of identifying all H.245 MultimediaSystemControlPDU messages and shall respond to all messages needed to realize the required H.310 functions. H.310 terminals shall send the FunctionNotSupported message in response to any unrecognized request, response or command, or to any H.245 message that is not supported by the H.310 terminal.
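The required behaviour (respond to recognized messages, return FunctionNotSupported otherwise) can be sketched as a simple dispatcher. FunctionNotSupported is the H.245 message named above; the handler set, the string representation of messages and the dispatch logic are assumptions for illustration, since a real terminal decodes ASN.1-encoded MultimediaSystemControl PDUs.

```python
# Illustrative sketch only: message names as strings stand in for decoded
# H.245 PDUs, and the supported set is an example, not an exhaustive list.
SUPPORTED_MESSAGES = {
    "masterSlaveDetermination",
    "terminalCapabilitySet",
    "openLogicalChannel",
    "closeLogicalChannel",
    "requestMode",
    "roundTripDelayRequest",
    "maintenanceLoopRequest",
}

def handle_h245_message(message_type: str) -> str:
    """Process a recognized message; otherwise answer FunctionNotSupported,
    as required of H.310 terminals."""
    if message_type in SUPPORTED_MESSAGES:
        return f"processed {message_type}"
    return "FunctionNotSupported"

print(handle_h245_message("openLogicalChannel"))
print(handle_h245_message("someUnrecognizedRequest"))
```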

Non-standard capabilities and control messages may be issued using the NonStandardParameter structure defined in Recommendation H.245. Note that, while the meaning of non-standard messages is defined by individual organizations, equipment built by any manufacturer may signal any non-standard message, provided the meaning is known.

All timers defined in Recommendation H.245 should have periods of at least the maximum data delivery time allowed by the layer carrying H.245, including any retransmissions.
