
Beyond the APIs, you may need to understand certain EnableX concepts, capabilities, behaviors, and features. Many of these come into play during the development, integration, and implementation of your Application.


How to build my first Video Chat Application?

To get you started quickly with developing your first Video Chat Application, we have shared sample application code on GitHub for you to explore, edit, and run. Sample Applications are available for all 4 Toolkits, viz. Web, Android, iOS, and React Native, each written in the programming language of its respective platform.

Important Notes:

  • Each Application Repository has a README.md file. Read through it to understand the pre-requisites for the Application to work.
  • Configure the application accordingly.
  • Web Browser based Applications have both Client and Server Components.
  • Mobile Toolkit based Applications have only a Client Component. Therefore, you would also need a Server Component taken from any of the Web Browser based Application Repositories to set up the Application Server with which the Client Application communicates.

Workflow: The following are the basic steps taken by all apps to see each other’s video:

  • Get Token: The Client End Application needs a Token to get connected to a virtual Room hosted on the EnableX Server to carry out RTC Communication. For this, the Client End Application calls your Application Server, which in turn gets a Token from EnableX using a Server API call and returns it to the Client.
  • Connect & Join Room: The Token received is used by the Client Application, written using the related Toolkit, to get connected to the Room.
  • Subscribe to Remote Streams: Once connected, the Client End needs to subscribe to all available Remote Participants’ Streams in the Room.
  • Handle Active Talker: The Client End Point repeatedly receives an updated list of participants talking in the Room, from which the UI can be drawn to play Remote Streams in a Video Player. Know more about Active Talkers.
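
The Get Token step above is performed by your Application Server. The following Node.js sketch shows the shape of such a Server API call; the token route, API version, and payload field names used here are assumptions for illustration — confirm them against the Server API documentation.

```javascript
// Build the Server API request your Application Server would send to
// EnableX to obtain a Token for a Client. (Route and field names are
// illustrative assumptions; verify against the Server API reference.)
function buildTokenRequest(roomId, appId, appKey, user) {
    // Server API calls authenticate with your App ID / App Key pair
    const auth = Buffer.from(appId + ':' + appKey).toString('base64');
    return {
        method: 'POST',
        url: 'https://api.enablex.io/video/v2/rooms/' + roomId + '/tokens',
        headers: {
            'Authorization': 'Basic ' + auth,
            'Content-Type': 'application/json'
        },
        body: JSON.stringify({
            name: user.name,    // display name shown to other participants
            role: user.role,    // e.g. "moderator" or "participant"
            user_ref: user.ref  // your own reference ID for this user
        })
    };
}
```

Your Application Server would issue this request (e.g. with `https.request` or `fetch`) and relay the Token it receives back to the Client, which then uses it to connect to the Room.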

How to handle Active Talkers in a Session?

During a video conference session, the EnableX Active Talker feature automatically detects which person(s) is/are speaking and notifies all connected users with an ordered list of users talking, i.e. Active Talkers. Whenever there is a change in the list of Talkers, EnableX notifies all users with updated information on who is currently speaking.

In a multi-party video conference, more Server and Client End Point resources are consumed. Therefore, EnableX sends up to a maximum of 6 streams of the most recent Active Talkers (those generating sound) in the Room to each endpoint. These Streams are numbered from Stream#1 to Stream#6. In addition to Active Talkers, EnableX sends 2 more streams, viz. Screen Share (Stream#11) and Canvas Stream (Stream#21).

The list of Active Talkers is received at an endpoint in JSON format whenever there is a change in the Active Talker list. The list is ordered by recency, i.e. the most recent Talker is listed first.
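
The stream numbering described above can be captured in a small helper, useful when routing incoming streams to the right part of your UI. This is a sketch based on the numbering stated in this section, not an EnableX Toolkit function:

```javascript
// Classify a stream by its well-known stream number:
// 1–6 carry Active Talkers, 11 carries Screen Share, 21 carries Canvas.
function streamKind(streamId) {
    if (streamId >= 1 && streamId <= 6) return 'active-talker';
    if (streamId === 11) return 'screen-share';
    if (streamId === 21) return 'canvas';
    return 'unknown';
}
```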

How do you receive the Active Talkers List?

  • Through Event Notifications in Web Toolkit: active-talkers-updated
  • Through Delegate Method in iOS Toolkit: -activeTalkerList
  • Through Callback Event in Android Toolkit: onActiveTalkerList

Note:

  • The Active Talker List received at an endpoint will never contain the local stream. Therefore,
    • The self-stream can be played locally if you need to.
    • The Active Talker List differs for each user. Each of the 6 most recent Talkers will receive 5 talkers’ information at their end point, whereas all other users will receive 6 talkers’ information in their list.
  • A Client End Application may opt to receive fewer Active Talker streams, if so desired.
  • If a Room is defined with Canvas Streaming enabled, there will be 1 less talker to accommodate bandwidth/resources for Canvas Streaming. In this case, each of the 5 most recent Talkers will receive 4 talkers’ information at their end point, whereas all other users will receive 5 talkers in their list.

Active Talkers List JSON

{
    "active": true,
    "activeList": [
        {
            "clientId": "String",
            "mediatype": "audiovideo", // Enum: audiovideo, audioonly
            "name": "String",
            "reason": "user",
            "streamId": Number,
            "videoaspectratio": "16:9",
            "videomuted": Boolean
        }
    ]
}

This Active Talkers JSON is used for managing the UI and for playing audio/video streams from remote users. To play a specific stream, you need the Remote Stream object related to the streamId of a Talker. Please note that before you can play any stream from the Active Talkers list, you must have subscribed to all the “dummy” streams available in the Room’s meta information.

To handle Active Talker using Web Toolkit:

// Listen for Active Talker updates and play each talker's stream
room.addEventListener('active-talkers-updated', function (event) {
       var talkerList = event.message.activeList;
       for (var i = 0; i < talkerList.length; i++) {
           if (talkerList[i] && talkerList[i].streamId) {
               var stream    = room.remoteStreams.get(talkerList[i].streamId);
               var stream_id = talkerList[i].streamId;
               var username  = talkerList[i].name;
               stream.play("DOM_ELEMENT_ID", PlayerOptions);
           }
       }
});
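
Each active-talkers-updated event carries the full current list, so rather than re-rendering every player on every event, a UI can diff the new list against the previous one and only add or remove the players that changed. A minimal, Toolkit-free sketch of such a diff (helper name is ours):

```javascript
// Compare two Active Talker lists (arrays of { streamId, name, ... })
// and report who joined (start playing) and who left (remove players).
function diffTalkers(prevList, nextList) {
    const prevIds = new Set(prevList.map(t => t.streamId));
    const nextIds = new Set(nextList.map(t => t.streamId));
    return {
        joined: nextList.filter(t => !prevIds.has(t.streamId)), // play these streams
        left:   prevList.filter(t => !nextIds.has(t.streamId))  // remove these players
    };
}
```

Inside the event handler you would keep the last list in a variable, call `diffTalkers(lastList, event.message.activeList)`, play the `joined` streams, and tear down the players of the `left` ones.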

To handle Active Talker using Android Toolkit:

public void onActiveTalkerList(JSONObject jsonTalkers) {
	JSONArray talkers = jsonTalkers.getJSONArray("activeList");
	Map<String, EnxStream> remoteStreams = room.getRemoteStreams();
	// Get all remote streams

	for (int i = 0; i < talkers.length(); i++) {
		JSONObject talker = talkers.getJSONObject(i);
		String streamId = talker.getString("streamId");
		String streamName = talker.getString("name");

		// Get the remote stream for this streamId
		EnxStream stream = remoteStreams.get(streamId);
		JSONObject attributes = stream.getAttributes();
		attributes.put("name", streamName); // Set the name of the Stream

		// Initialize the player view
		EnxPlayerView playerView = new EnxPlayerView(
			    Current-Class-Context, ScalingType, mediaOverlay);

		// Attach the stream to the player view
		stream.attachRenderer(playerView);
		yourView.addView(playerView); // Add playerView to your view
	}
}

To handle Active Talker using iOS Toolkit:

// Updated Active Talkers list received
- (void)room:(EnxRoom *)room activeTalkerList:(NSArray *)data {
	NSDictionary *responseDict = data[0];
	NSArray *activeTalkerArray = responseDict[@"activeList"];

	// Loop through the Active Talker List
	for (int i = 0; i < activeTalkerArray.count; i++) {
		NSDictionary *activeTalkerDict = activeTalkerArray[i];

		// Get the remote stream for this streamId
		NSString *streamId = activeTalkerDict[@"streamId"];
		EnxStream *stream = room.streamsByStreamId[streamId];

		// Initialize the player view
		EnxPlayerView *playerView = [[EnxPlayerView alloc] initRemoteView:(CGRect)];
		[stream attachRenderer:playerView];
		[playerView setDelegate:self];
	}
}

Note: The local Stream will not be included in the Active Talkers JSON list sent to an End Point even if that End Point is talking. Hence the Active Talker JSON list sent to different End Points varies. In case you need to play your own stream, please use the local stream.

How to manage UI Layout showing videos of multiple parties in a Session?

Note! A Layout Manager is not included up to the latest release of EnableX. However, we aim to release a Layout Manager either as part of the Toolkit or as a stand-alone Library. For now, the Developer needs to build and manage the UI Layout to show Video Streams and Presentation Streams, viz. Screen Share & Canvas Streaming.

It’s a basic function of an RTC Application to show/play Local & Remote Audio/Video Streams. However, in a multi-party conference it becomes difficult to manage multiple videos of participants entering and leaving rooms, with many other presentations going on and off at the same time.

In an RTC Application, you publish your local streams for others to receive and play. Similarly, you receive remote participants’ streams and play them at your end. A stream may contain Audio and/or Video and is playable in an Audio or Video player as applicable. Please note that when you play a stream using the EnxStream.play() method, the Toolkit decides whether to use an Audio Player or a Video Player depending on the audio and video tracks available in the Stream.

Let’s understand the different types of streams available in a Session, how to play them in your UI, and how to remove them from your UI as participants leave the session. You need to consider the following Streams to play:

Play Local Stream

A Local Stream is initiated locally using different source e.g.:

  • Microphone Device to add Audio track
  • Camera Device to add Video track
  • Screen Share to add Screen track
  • Canvas Element to add Canvas track

A locally initiated stream may be published into the Room for others to receive and play. Please note that you will never receive a loop-back stream from the EnableX Server for your local stream initiated using Microphone & Camera. Therefore, if you wish to play such a local stream, you need to use the local stream Object to play it locally.

localStream.play("DIV_ID_LOCAL_STREAM", localStreamOptions);

However, you will receive a loop-back of your local stream initiated using Screen Share or a Canvas Element. Such streams can be played like a remote stream after subscribing to them:

screenShareStream.play("DIV_ID_SCREEN", screenOptions);  
canvasStream.play("DIV_ID_CANVAS", canvasOptions);

Play Remote Audio/Video Streams

Audio/Video Streams of all remote Participants may be played with reference to the Active Talker implementation of EnableX. You will receive up to 6 Remote Participants’ Audio/Video Streams, decided by EnableX depending on which participants are actively talking in the Room. You will receive an active-talkers-updated event notification with an updated list of Active Talkers whenever there is a change in the Talkers’ list. You can only play the streams given in the updated Talker List.

Know more about how to handle Active Talker Updates….

Play Remote Screen Share Stream

You will receive share-started event whenever there is a screen-share available to play. Similarly, you will receive share-stopped event when screen-share is stopped.

// Notification when share starts
room.addEventListener("share-started", function (event) {
     // Get Stream# 11 which carries Screen Share 
     var shared_stream = room.remoteStreams.get(11);  
     shared_stream.play("div_id", playerOptions); // Play in Player
});

// Notification when share stops, 
room.addEventListener("share-stopped", function (event) {
     // Handle UI here to stop playing the stream and remove its division
});

Know more about Screen Share…

Play Remote Canvas Stream

You will receive canvas-started event whenever there is a remote canvas stream available to play. Similarly, you will receive canvas-stopped event when canvas streaming is stopped.

// Notification, when canvas stream starts
room.addEventListener("canvas-started", function (event) {
        var canvasStream = room.remoteStreams.get(21);
        if(canvasStream.stream !== undefined) {
             canvasStream.play("PlayerDiv");
        }
 });

// Notification when canvas streaming stops, 
room.addEventListener("canvas-stopped", function (event) {
     // Handle UI here to stop playing the stream and remove its division
});

Know more on Canvas Streaming…

How to show Video Stream in a Video Player effectively?

The resolution and aspect ratio of a Video Source (either a video file or a stream captured using a Camera) often differ from the Player’s supported resolution and aspect ratio. This is where Video Scaling comes into play, to display the Video with respect to the Player’s supported resolution and fill the video appropriately in the available Player Area.

Video Resolution Scaling is handled by Players automatically, but how the video fits the Player Area may be chosen. Replay devices and Video Players work with several different fitment modes. In HTML5, the Video Player (<VIDEO> tag) has 3 modes to choose from, depending on the UI/experience you want in your App, viz.

  • CONTAIN: Shows the FULL VIDEO resized to fit in the Player while maintaining the Aspect Ratio of the Video Source. The Player may therefore show BLANK areas either at the TOP/BOTTOM or the LEFT/RIGHT sides of the Video, generally rendered with a black background colour. E.g. a 2.81:1 ratio movie shown on a TV appears with black padding at the top and bottom of the screen.
  • FILL: Shows the FULL VIDEO covering the whole visible Player Area. The Player stretches the Video Source either vertically or horizontally to cover the whole Player Area, so the video appears distorted. E.g. a 4:3 ratio TV channel appears horizontally stretched on a 16:9 HDTV.
  • COVER: Shows only the most important part of the Video, i.e. the centre part of the Video Frame that fits best within the Player Area. The video is not distorted; rather, the parts of the Video Frame that overflow the Player Area are hidden.

You may use the “style” attribute on the Video Player to define the “object-fit” property for the desired UI/UX.

<video 
     controls="controls" height="360" width="600" 
     preload="auto" src="VIDEO-FILE.mp4" 
     style="object-fit: contain;"> 
</video>

How to fetch Recordings?

There are 2 types of Recording Files available with EnableX to fetch / download, viz.

  1. Individual Participants’ Stream Recording Files: Available only if you have subscribed to the Recording Service and have recorded your sessions.
  2. Transcoded Re-playable Session: A single composite video file created out of all the individual participants’ streams, aligned to the right playtime. Available only if you have subscribed to the Recording & Transcoding Services and have recorded your sessions.

There are different ways to access Recordings. They are:

Fetch using Server API

Using the Server API, you can fetch a list of Recordings through an HTTP GET request to its archive route. You may use the following filters to get the desired set of recordings:

  • For a Period – For a given From and To date
  • For a Room ID
  • For a Room ID within a Period
  • For a specific Session – For a given Conference Number

After getting the Report, you may need to do an HTTP GET to each of the URLs given in the Report to download the files individually.
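
The filters above translate into query parameters on the archive route. The following sketch shows one way to build such a request URL; the parameter names used here (from, to, room_id, conf_num) are assumptions for illustration — confirm the actual names in the Server API documentation.

```javascript
// Build an archive-route URL from the optional filters described above.
// Parameter names are illustrative; verify against the Server API docs.
function buildArchiveUrl(base, filters) {
    const params = new URLSearchParams();
    if (filters.from)    params.set('from', filters.from);        // period start
    if (filters.to)      params.set('to', filters.to);            // period end
    if (filters.roomId)  params.set('room_id', filters.roomId);   // limit to one Room
    if (filters.confNum) params.set('conf_num', filters.confNum); // one specific Session
    const query = params.toString();
    return query ? base + '?' + query : base;
}
```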

Please refer to online documentation for detailed explanation of API calls.

Get delivered to your FTP / SFTP Server

EnableX may deliver your Recording files to your FTP/SFTP Server. FTP Delivery is carried out in batches, so you may need to wait a considerable amount of time for the deliverables to reach your Server.

You need to set up FTP Delivery with FTP Credentials through the Portal. Follow the path:

Portal Login > My Accounts >  Preferences > Recording Delivery Management

Once configured, EnableX transfers your files and keeps them organized in sub-folders within the folder you designated for delivery on your FTP Server. Sub-folders are organized as:

FTP-Server/Designated-Folder/App-ID/Room-ID/Conference-Num/filesname.extn

Download through Portal

Log in to your Portal to download Recorded media for a selected Period. Follow the path:

Portal Login > Downloads (Left Bar) > Recordings > Download (Column of each Row)

Get notified through Notification URL

EnableX may notify you as soon as any Recording file is available for download, by sending an HTTP POST to a designated Notification URL. You need to set up the URL at your Application Server. To configure your Notification URL, follow the path:

Portal Login > My Applications (Left Bar) > Action (Column of Each Row) > Manage Application > Settings (Tab) > Notification URL (Input)

Once configured, EnableX does an HTTP POST with a JSON Raw Body to the URL for you to process. Note that you need to do an HTTP GET to each of the URLs given in the JSON to download the files individually. The JSON formats are given below:

// When Recording files are ready
{
	 "type": "recording",
	 "trans_date": "Datetime",  /* In UTC */
	 "app_id": "String",
	 "room_id": "String",
	 "conf_num": "String",
	 "recording": [
		{ "url": "http://FQDN/path/file" }
	 ]
}

// When the Transcoded Video file is ready
{
	 "type": "transcoded",
	 "trans_date": "Datetime",  /* In UTC */
	 "app_id": "String",
	 "room_id": "String",
	 "conf_num": "String",
	 "transcoded": [
		{ "url": "http://FQDN/path/file" }
	 ]
}
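
A notification handler mostly needs to pull the file URLs out of whichever payload shape arrives. A minimal, Toolkit-free sketch that works for both payloads above (the helper name is ours):

```javascript
// Extract the list of downloadable file URLs from a notification payload.
// The "type" field names the key ("recording" or "transcoded") that holds
// the array of { url } objects, per the JSON formats shown above.
function extractFileUrls(payload) {
    const files = payload[payload.type] || [];
    return files.map(f => f.url);
}
```

Your endpoint would then perform an HTTP GET on each returned URL to download the files.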

How to get usage report, i.e. CDR (Call Detail Report)?

A Call Detail Report or CDR offers a complete log of activities from the time a user connects to a Room until they disconnect. CDRs help you understand your application’s call usage for each session.

There are different ways through which you may get CDRs; these are:

Fetch using Server API

Using the Server API, you can fetch CDRs through an HTTP GET request to its cdr route. You may use the following filters to get the desired set of CDRs:

  • For a Period – For a given From and To date
  • For a Room ID
  • For a Room ID within a Period
  • For a specific Session – For a given Conference Number

Please refer to online documentation for a detailed explanation of API calls.

Download through Portal

Log in to your Portal to download CDRs for a selected Period. You may also choose the download format, either .CSV or .XLS. Follow the path:

Portal Login > Reports (Left Bar) > CDR > [Download]

Receive by Email

You may opt to receive CDRs by email on a daily basis. Through your Portal, you may set up Email Delivery of CDRs. Follow the path:

Portal Login > My Account (Left Bar) > Preferences > Notification Management > Transaction Notification (Section) > CDR (Checkbox) > [Save]

How can we control the player size and toolbar?

You have greater control of the player using a JSON with configurable keys. Refer to the Player Options here.

To control the size of your Player, create a Container <DIV> for your Player and set the Player Options to fill the Container <DIV>. By setting the Player Options and the Container <DIV> properties, you may achieve any desired view/behaviour for your player.

Refer to the Player Options JSON and the example of playing a stream within a Container DIV given below:

var options = {
    player: {
        'height': '100%',
        'width': '100%',
        'minHeight': 'inherit',
        'minWidth': 'inherit'
    },
    toolbar: {
        displayMode: false,
        branding: {
            display: false
        }
    }
};

document.write("<DIV id='myPlayerDIV' class='myPlayerClass'></DIV>");

stream.play('myPlayerDIV', options);

How can we implement a Chat Application?

Using the EnxRoom.sendMessage() method and the associated event listener message-received, you may build a text-chat application with different functionalities/objectives, viz.

  • Public Messaging: To send messages to all connected users.
  • Private Messaging: To send messages to a specific user.
  • Group Messaging: To send messages to more than one specified user.

Steps to follow:

  1. Get Token: Create a Token using the Server API.
  2. Connect & Join Room: Use the Token to get connected to the Room.
  3. Know Available Participants: Get to know all participants available in the Room from the Room Meta Information you receive with the room-connected event after connecting.
  4. Manage Participants: Participants may leave the Session or re-join, and new participants may join too. Listen to the user-connected and user-disconnected events to track users joining and leaving, and keep your Participant List current.
  5. Send Message: Use the EnxRoom.sendMessage() method to send messages to one participant, selected participants, or all participants.
  6. Receive Message: Listen to the message-received event to receive incoming messages.
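
Steps 3 and 4 above amount to maintaining a participant list from Room events. The following Toolkit-free sketch shows the bookkeeping, assuming simplified event objects (the event shapes and helper name here are illustrative, not the Toolkit’s own types):

```javascript
// Maintain a participant list from simplified room events:
// - room-connected:    seed the list from the Room Meta Information
// - user-connected:    add the newly joined participant
// - user-disconnected: remove the participant who left
function applyRoomEvent(participants, event) {
    switch (event.type) {
        case 'room-connected':
            return event.userList.slice();
        case 'user-connected':
            return participants.concat([event.user]);
        case 'user-disconnected':
            return participants.filter(p => p.clientId !== event.user.clientId);
        default:
            return participants;
    }
}
```

With the list kept current this way, you always know the valid recipients for private or group messages at step 5.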

Know more on the Messaging API….