
Building a React Native Live Video Broadcasting App using Agora

Ekaansh Arora

Posted on Nov 2

• Originally published at agora.io


Live video broadcasting has seen a range of uses, from live shopping to live concerts. There are a lot of aspects to building a scalable, high-quality live video streaming app. For example, maintaining low latency, balancing load, and managing thousands of users in the audience can be challenging, especially while also maintaining cross-platform compatibility.

There’s a really easy way to make this happen using the Agora React Native SDK. In this article, we’ll build a live broadcasting app that can have multiple broadcasters and host thousands of users by using the magic of the Agora Video SDK. We’ll go over the structure, setup, and execution of the app before diving into how it works. You can get a live broadcast going in a few simple steps within a matter of minutes.

We’ll be using the Agora RTC SDK for React Native for the example below. I’m using v3.4.6 at the time of writing.

Creating an Agora account

Create an account (https://sso.agora.io/en/signup?utm_source=medium&utm_medium=blog&utm_campaign=building-a-react-native-live-video-broadcasting-app-using-agora) and log in to the dashboard. You can follow this guide for reference: https://www.agora.io/en/blog/how-to-get-started-with-agora

Navigate to the Project List tab under the Project Management tab, and create a new project by clicking the blue Create button.
Create a new project and retrieve the App ID. If you select App ID with a token, obtain a temporary token as well for your project. You can find a link to generate temporary tokens on the edit page. The temporary token will be used to authorize your requests while you’re developing the application.

Note: Token authentication is recommended for all RTE apps running in production environments. For more information about token-based authentication in the Agora platform, see this guide: https://docs.agora.io/en/Video/token?platform=All%20Platforms
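During development you can simply paste the temporary token from the console into the app. In production, your app would instead request a token from your own server at runtime. Here's a minimal client-side sketch of what that could look like; the endpoint URL and response shape are hypothetical assumptions for illustration, not part of this project:

// Hypothetical helper: fetch a fresh RTC token from your own token server.
// The URL and JSON shape below are assumptions for illustration only.
async function fetchToken(channelName: string, uid: number): Promise<string> {
  const response = await fetch(
    `https://your-token-server.example.com/rtc/${channelName}/${uid}`
  );
  const { rtcToken } = await response.json();
  return rtcToken; // pass this to joinChannel instead of null
}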

Structure of our example

This is the structure of our application:

.
├── android
├── components
│   ├── Permission.ts
│   └── Style.ts
├── ios
├── App.tsx
└── index.js

Let’s run the app

You’ll need to have the LTS version of Node.js and NPM installed.

  • Make sure you’ve registered an Agora account, set up a project, and generated an App ID (and temporary token).
  • Download and extract the ZIP file from the master branch.
  • Run npm install to install the app dependencies in the unzipped directory.
  • Navigate to ./App.tsx and enter the App ID that we obtained from the Agora Console (appId: '<YourAppIDHere>'). If you're using tokens, enter your token and channel name as well.
  • If you're building for iOS, open a terminal and execute cd ios && pod install. You can then open the ios/<projectName>.xcworkspace file to open your project in Xcode and build the app. (The iOS simulator does not support the camera. Use a physical device instead.)
  • If you're building for Android, connect your device and execute npm run android to start the app. Wait a few minutes for the app to build.
  • Once you see the home screen on your mobile or emulator, click the Start Call button on the device.

That’s it. You should have a video call going between the two devices. The app uses test as the channel name.

Getting to how it works

Permission.ts

import {PermissionsAndroid} from 'react-native'

/**
 * @name requestCameraAndAudioPermission
 * @description Function to request permission for Audio and Camera
 */
export default async function requestCameraAndAudioPermission() {
  try {
    const granted = await PermissionsAndroid.requestMultiple([
      PermissionsAndroid.PERMISSIONS.CAMERA,
      PermissionsAndroid.PERMISSIONS.RECORD_AUDIO,
    ])
    if (
      granted['android.permission.RECORD_AUDIO'] ===
        PermissionsAndroid.RESULTS.GRANTED &&
      granted['android.permission.CAMERA'] ===
        PermissionsAndroid.RESULTS.GRANTED
    ) {
      console.log('You can use the cameras & mic')
    } else {
      console.log('Permission denied')
    }
  } catch (err) {
    console.warn(err)
  }
}

We’re exporting a function to request camera and microphone permissions from the OS on Android.

App.tsx

The App.tsx file contains the core logic of our video call.

import React, { Component } from 'react';
import {
  Platform,
  ScrollView,
  Text,
  TouchableOpacity,
  View,
} from 'react-native';
import RtcEngine, {
  RtcLocalView,
  RtcRemoteView,
  VideoRenderMode,
  ClientRole,
  ChannelProfile,
} from 'react-native-agora';

import requestCameraAndAudioPermission from './components/Permission';
import styles from './components/Style';

/**
 * @property appId Agora App ID
 * @property token Token for the channel
 * @property channelName Channel Name for the current session
 */
const token = null;
const appId = '<Agora App ID>';
const channelName = 'test';

/**
 * @property isHost Boolean value to select between broadcaster and audience
 * @property joinSucceed State variable for storing success
 * @property peerIds Array for storing connected peers
 */
interface State {
  isHost: boolean;
  joinSucceed: boolean;
  peerIds: number[];
}
...

We start by writing the import statements. Next, we have some constants for our App ID, token, and channel name.

We define an interface for our application state containing isHost (a Boolean value to switch between audience and broadcaster; a host can both send and receive streams, whereas an audience can only receive streams), joinSucceed (a Boolean value to store if we’ve connected successfully), and peerIds (an array to store the UIDs of other users in the channel).

...
export default class App extends Component<null, State> {
  _engine?: RtcEngine;

  constructor(props) {
    super(props);
    this.state = {
      isHost: true,
      joinSucceed: false,
      peerIds: [],
    };
    if (Platform.OS === 'android') {
      // Request required permissions from Android
      requestCameraAndAudioPermission().then(() => {
        console.log('requested!');
      });
    }
  }

  componentDidMount() {
    this.init();
  }

  /**
   * @name init
   * @description Function to initialize the Rtc Engine, attach event listeners and actions
   */
  init = async () => {
    this._engine = await RtcEngine.create(appId);
    await this._engine.enableVideo();
    await this._engine?.setChannelProfile(ChannelProfile.LiveBroadcasting);
    await this._engine?.setClientRole(
      this.state.isHost ? ClientRole.Broadcaster : ClientRole.Audience
    );

    this._engine.addListener('Warning', (warn) => {
      console.log('Warning', warn);
    });

    this._engine.addListener('Error', (err) => {
      console.log('Error', err);
    });

    this._engine.addListener('UserJoined', (uid, elapsed) => {
      console.log('UserJoined', uid, elapsed);
      // Get current peer IDs
      const { peerIds } = this.state;
      // If new user
      if (peerIds.indexOf(uid) === -1) {
        this.setState({
          // Add peer ID to state array
          peerIds: [...peerIds, uid],
        });
      }
    });

    this._engine.addListener('UserOffline', (uid, reason) => {
      console.log('UserOffline', uid, reason);
      const { peerIds } = this.state;
      this.setState({
        // Remove peer ID from state array
        peerIds: peerIds.filter((id) => id !== uid),
      });
    });

    // If Local user joins RTC channel
    this._engine.addListener('JoinChannelSuccess', (channel, uid, elapsed) => {
      console.log('JoinChannelSuccess', channel, uid, elapsed);
      // Set state variable to true
      this.setState({
        joinSucceed: true,
      });
    });
  };
...

We define a class-based component. The _engine variable stores the instance of the RtcEngine class, which provides the methods our application invokes to manage the live stream.

In the constructor, we set our state variables and request permission for the camera and the mic on Android. When the component is mounted, we call the init function, which initializes the RTC engine using the App ID. It also enables the video by calling the enableVideo method on our engine instance.

We set the channel profile to Live Broadcasting and the client role based on our isHost state variable.
The init function also adds event listeners for various events in the live broadcast. For example, the UserJoined event gives us the UID of a user when they join the channel. We store this UID in our state.

(If there are users connected to the channel before we joined, a UserJoined event is fired for each of them after we successfully join the channel.)

...
  /**
   * @name toggleRole
   * @description Function to toggle the role between broadcaster and audience
   */
  toggleRole = async () => {
    // Flip the role, then update the client role on the engine
    this.setState(
      {
        isHost: !this.state.isHost,
      },
      async () => {
        await this._engine?.setClientRole(
          this.state.isHost ? ClientRole.Broadcaster : ClientRole.Audience
        );
      }
    );
  };

  /**
   * @name startCall
   * @description Function to start the call
   */
  startCall = async () => {
    // Join Channel using null token and channel name
    await this._engine?.joinChannel(token, channelName, null, 0);
  };

  /**
   * @name endCall
   * @description Function to end the call
   */
  endCall = async () => {
    await this._engine?.leaveChannel();
    this.setState({ peerIds: [], joinSucceed: false });
  };
...

Next, we have the toggleRole function, which switches the role between audience and broadcaster, and the startCall and endCall functions to start and end the call. The toggleRole function updates the state and then calls setClientRole with a role argument based on the new state. The joinChannel method takes in a token, a channel name, optional info, and an optional UID. (If you set the UID to 0, the SDK automatically assigns one.)
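If you're using tokens in production, startCall could be adapted to fetch one first. Here's a sketch reusing the hypothetical fetchToken helper from earlier; the fixed UID is an illustrative assumption, since tokens are typically generated for a specific UID:

  // Sketch: join with a server-issued token instead of null.
  startCall = async () => {
    const uid = 1234; // example UID (assumption for illustration)
    const rtcToken = await fetchToken(channelName, uid);
    await this._engine?.joinChannel(rtcToken, channelName, null, uid);
  };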

...
  render() {
    return (
      <View style={styles.max}>
        <View style={styles.max}>
          <Text style={styles.roleText}>
            You're {this.state.isHost ? 'a broadcaster' : 'the audience'}
          </Text>
          <View style={styles.buttonHolder}>
            <TouchableOpacity onPress={this.toggleRole} style={styles.button}>
              <Text style={styles.buttonText}> Toggle Role </Text>
            </TouchableOpacity>
            <TouchableOpacity onPress={this.startCall} style={styles.button}>
              <Text style={styles.buttonText}> Start Call </Text>
            </TouchableOpacity>
            <TouchableOpacity onPress={this.endCall} style={styles.button}>
              <Text style={styles.buttonText}> End Call </Text>
            </TouchableOpacity>
          </View>
          {this._renderVideos()}
        </View>
      </View>
    );
  }

  _renderVideos = () => {
    const { joinSucceed } = this.state;
    return joinSucceed ? (
      <View style={styles.fullView}>
        {this.state.isHost ? (
          <RtcLocalView.SurfaceView
            style={styles.max}
            channelId={channelName}
            renderMode={VideoRenderMode.Hidden}
          />
        ) : (
          <></>
        )}
        {this._renderRemoteVideos()}
      </View>
    ) : null;
  };

  _renderRemoteVideos = () => {
    const { peerIds } = this.state;
    return (
      <ScrollView
        style={styles.remoteContainer}
        contentContainerStyle={styles.remoteContainerContent}
        horizontal={true}
      >
        {peerIds.map((value) => {
          return (
            <RtcRemoteView.SurfaceView
              key={value}
              style={styles.remote}
              uid={value}
              channelId={channelName}
              renderMode={VideoRenderMode.Hidden}
              zOrderMediaOverlay={true}
            />
          );
        })}
      </ScrollView>
    );
  };
}
...

We define the render function, which displays the buttons to toggle the role and to start and end the call, along with our local video feed and the remote users' video feeds. The _renderVideos function renders these feeds once we've joined the channel.

To display the local user's video feed, we use the RtcLocalView.SurfaceView component, which takes in channelId and renderMode (which can be used to fit the video inside a view or zoom to fill the view) as props. To display a remote user's video feed, we use the RtcRemoteView.SurfaceView component from the SDK, which takes in the UID of the remote user along with channelId and renderMode. We map over the remote users' UIDs in the peerIds array to display a video for each.
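For example, if you'd rather letterbox the video inside the view than crop it to fill, you can swap the render mode. VideoRenderMode.Fit comes from the same enum we already imported; a minimal sketch for the local preview:

// Letterbox the video inside the view instead of zooming to fill it.
<RtcLocalView.SurfaceView
  style={styles.max}
  channelId={channelName}
  renderMode={VideoRenderMode.Fit}
/>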

Style.ts

import { Dimensions, StyleSheet } from 'react-native';

const dimensions = {
  width: Dimensions.get('window').width,
  height: Dimensions.get('window').height,
};

export default StyleSheet.create({
  max: {
    flex: 1,
  },
  buttonHolder: {
    height: 100,
    alignItems: 'center',
    flex: 1,
    flexDirection: 'row',
    justifyContent: 'space-evenly',
  },
  button: {
    paddingHorizontal: 20,
    paddingVertical: 10,
    backgroundColor: '#0093E9',
    borderRadius: 25,
  },
  buttonText: {
    color: '#fff',
  },
  fullView: {
    width: dimensions.width,
    height: dimensions.height - 100,
  },
  remoteContainer: {
    width: '100%',
    height: 150,
    position: 'absolute',
    top: 5,
  },
  // Referenced by the remote-video ScrollView's contentContainerStyle in App.tsx
  remoteContainerContent: {
    paddingHorizontal: 2.5,
  },
  remote: {
    width: 150,
    height: 150,
    marginHorizontal: 2.5,
  },
  noUserText: {
    paddingHorizontal: 10,
    paddingVertical: 5,
    color: '#0093E9',
  },
  roleText: {
    textAlign: 'center',
    fontWeight: '700',
    fontSize: 18,
  },
});

The Style.ts file contains the styling for the components.

Conclusion

That’s how easy it is to build a live video broadcasting app. You can refer to the Agora React Native API Reference to see methods that can help you quickly add features like muting the camera and mic, setting video profiles, audio mixing, and much more.
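As a quick taste, here's a minimal sketch of muting the local mic and flipping cameras using the same _engine instance from our App component; muteLocalAudioStream and switchCamera are methods on RtcEngine (the flipCamera/toggleMic names are just illustrative):

  // Sketch: mute/unmute the local microphone.
  toggleMic = async (muted: boolean) => {
    await this._engine?.muteLocalAudioStream(muted);
  };

  // Sketch: flip between the front and rear cameras.
  flipCamera = async () => {
    await this._engine?.switchCamera();
  };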

If you’re deploying your app to production, you can read more about how to use tokens in this blog.

I invite you to join the Agora Developer Slack community. Feel free to ask any React Native questions in the #react-native-help-me channel.

