RTCAudioSource - Expected a .byteLength of 480, not 2770 #19
Due to limitations in the C++ APIs, onData must be given exactly 10 ms of audio at a time. Here's an example of what I've done to chunk the data properly:

class AudioSource {
private leftoverSamples: Int16Array;
private numLeftoverSamples: number;
private numberOfSamplesPerFrame: number;
constructor(
private audioSource: nonstandard.RTCAudioSource,
public readonly samplerate: number,
) {
// The number of frames wanted by node-webrtc when exporting audio data,
// exactly 10ms at the given sample rate. This is used internally by
// libwebrtc and cannot be changed.
// (samples / 10ms) = (samples / s) / (1000ms / 1s) * 10
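// e.g. at 24 kHz: 24000 / 100 = 240 samples per frame (480 bytes of 16-bit PCM).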
this.numberOfSamplesPerFrame = samplerate / 100;
this.leftoverSamples = new Int16Array(this.numberOfSamplesPerFrame);
this.numLeftoverSamples = 0;
}
public presentSamples(samples: Int16Array) {
// Procedure:
// 1. Fill up leftoverSamples with enough data to make a complete frame
// 2. Send out as many complete frames as possible
// 3. Put all remaining data into leftoverSamples
let chunkStart = 0;
while (chunkStart < samples.length) {
const wantedNumberOfSamples = this.numberOfSamplesPerFrame - this.numLeftoverSamples;
const remainingSamples = samples.length - chunkStart;
if (remainingSamples < wantedNumberOfSamples) {
// Append the partial chunk after any samples already buffered.
this.leftoverSamples.set(samples.slice(chunkStart), this.numLeftoverSamples);
this.numLeftoverSamples += remainingSamples;
break;
}
let chunk = samples.slice(chunkStart, chunkStart + wantedNumberOfSamples);
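// If a partial frame is buffered, top it up with the head of this chunk.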
if (this.numLeftoverSamples) {
this.leftoverSamples.set(chunk, this.numLeftoverSamples);
chunk = this.leftoverSamples;
this.numLeftoverSamples = 0;
}
this.audioSource.onData({
samples: chunk,
numberOfFrames: this.numberOfSamplesPerFrame,
sampleRate: this.samplerate,
});
chunkStart += wantedNumberOfSamples;
}
}
}

Sorry it's so complicated...
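For reference, a rough usage sketch of the class above (hypothetical setup, assuming 24 kHz mono PCM):

// Hypothetical usage: wrap an RTCAudioSource and feed it arbitrary-length chunks.
import { nonstandard } from '@roamhq/wrtc';

const rtcSource = new nonstandard.RTCAudioSource();
const source = new AudioSource(rtcSource, 24000);
const track = rtcSource.createTrack();

// Chunks of any length are fine; leftovers are buffered between calls.
source.presentSamples(new Int16Array(1385));
|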
Thanks for the quick response. No errors are thrown now, but when trying to play this track in the browser peer, I can't hear any audio being played. I made sure that the audio and track are enabled and not muted, the AudioContext is not paused, etc. I suspect there's some issue with the audio track created by RTCAudioSource; how would you go about debugging this?

Backend:

import { nonstandard } from '@roamhq/wrtc';
export class RTCAudioSourceWrapper {
private audioSource: nonstandard.RTCAudioSource;
private leftoverSamples: Int16Array;
private numLeftoverSamples: number;
private numberOfSamplesPerFrame: number;
constructor(public readonly sampleRate: number) {
// The number of frames wanted by node-webrtc when exporting audio data,
// exactly 10ms at the given sample rate. This is used internally by
// libwebrtc and cannot be changed.
// (samples / 10ms) = (samples / s) / (1000ms / 1s) * 10
this.audioSource = new nonstandard.RTCAudioSource();
this.numberOfSamplesPerFrame = sampleRate / 100;
this.leftoverSamples = new Int16Array(this.numberOfSamplesPerFrame);
this.numLeftoverSamples = 0;
}
onData(buffer: Buffer) {
// Procedure:
// 1. Fill up leftoverSamples with enough data to make a complete frame
// 2. Send out as many complete frames as possible
// 3. Put all remaining data into leftoverSamples
const samples: Int16Array = new Int16Array(
buffer.buffer,
buffer.byteOffset,
buffer.byteLength / Int16Array.BYTES_PER_ELEMENT,
);
let chunkStart = 0;
while (chunkStart < samples.length) {
const wantedNumberOfSamples =
this.numberOfSamplesPerFrame - this.numLeftoverSamples;
const remainingSamples = samples.length - chunkStart;
if (remainingSamples < wantedNumberOfSamples) {
// Append the partial chunk after any samples already buffered.
this.leftoverSamples.set(samples.slice(chunkStart), this.numLeftoverSamples);
this.numLeftoverSamples += remainingSamples;
break;
}
let chunk = samples.slice(chunkStart, chunkStart + wantedNumberOfSamples);
if (this.numLeftoverSamples) {
this.leftoverSamples.set(chunk, this.numLeftoverSamples);
chunk = this.leftoverSamples;
this.numLeftoverSamples = 0;
}
this.audioSource.onData({
samples: chunk,
numberOfFrames: this.numberOfSamplesPerFrame,
sampleRate: this.sampleRate,
});
chunkStart += wantedNumberOfSamples;
}
}
createTrack(): MediaStreamTrack {
return this.audioSource.createTrack();
}
}

// const audioSource: RTCAudioSourceWrapper = ...
const track = audioSource.createTrack();
const mediaStream = new MediaStream();
mediaStream.addTrack(track);
peerConnection.addTrack(track, mediaStream);
// sdp stuff

Frontend:

peerConnection.ontrack = ({ streams: [stream] }) => {
const audio = new Audio();
audio.srcObject = stream;
audio.play().catch((error) => {
console.error('Error playing audio:', error);
});
};

P.S. Why not add this wrapper to the nonstandard lib if there's a workaround to the internal limitation? |
A couple debugging steps I can think of off the top of my head:
Other than that, I don't see anything immediately wrong with your code, sorry.
I should do this... just haven't gotten around to it since I originally wrote that wrapper for another application |
Sorry for the late response, crazy weeks.
I'm running on
By the way, RTCPeerConnection's onconnectionstatechange and oniceconnectionstatechange weren't firing in the frontend JavaScript for some reason; I could only see the state changes in webrtc-internals. Any idea why?
Here's the webrtc-internals dump:
|
Hm, well that does not seem good! The browser's WebRTC seems to think it's connected (according to your dump). In any case! If you haven't already, I would next try to get two browsers to talk to each other over WebRTC in order to make sure your negotiation is working, and then see if it's really the server-side node-webrtc having trouble. |
I've tried 2 browsers as you suggested; they're still unable to connect for some reason, no idea what I'm missing. In the example below I'm passing a media stream only from client 1 to client 2, but passing media streams from both clients gives practically the same result. Here are the browser logs + webrtc-internals dump:

Backend - NestJS + Socket.io - a naive solution assuming only 2 open sockets, passing the offer and answer to the socket that didn't emit the event:

@SocketAuth()
@SubscribeMessage('localoffer')
async onSendLocalOffer(
@ConnectedSocket() client: ClientSocket,
@MessageBody() offer: RTCSessionDescriptionInit,
) {
this.clients.forEach((innerClient) => {
if (innerClient.id !== client.id) {
innerClient.emit('localoffer', offer);
console.log(
'local offer',
{ from: client.id, to: innerClient.id },
offer,
);
}
});
}
@SocketAuth()
@SubscribeMessage('localanswer')
async onSendLocalAnswer(
@ConnectedSocket() client: ClientSocket,
@MessageBody() answer: RTCSessionDescriptionInit,
) {
this.clients.forEach((innerClient) => {
if (innerClient.id !== client.id) {
innerClient.emit('localanswer', answer);
console.log(
'local answer',
{ from: client.id, to: innerClient.id },
answer,
);
}
});
}

Frontend - React - sendLocalOffer() is called by a button click in client 1 after sockets have been created for both client 1 and client 2:

const createPeerConnection = () => {
const ICE_SERVERS: { urls: string }[] = [
{ urls: 'stun:stun.l.google.com:19302' },
{ urls: 'stun:stun1.l.google.com:19302' },
{ urls: 'stun:stun2.l.google.com:19302' },
{ urls: 'stun:stun3.l.google.com:19302' },
{ urls: 'stun:stun4.l.google.com:19302' },
];
const result = new RTCPeerConnection({ iceServers: ICE_SERVERS });
result.ontrack = ({ streams: [stream] }) => {
console.log('ontrack', stream, 'tracks', stream.getAudioTracks());
const audio = new Audio();
audio.srcObject = stream;
audio.play().catch((error) => {
console.error('Error playing audio:', error);
});
};
result.onconnectionstatechange = () => {
console.log('Connection state change:', result.connectionState);
};
result.oniceconnectionstatechange = () => {
console.log('ICE connection state change:', result.iceConnectionState);
};
result.onicecandidate = async (event: RTCPeerConnectionIceEvent) => {
if (!event.candidate) return;
try {
await result.addIceCandidate(event.candidate);
console.log('ICE candidate added', event.candidate.candidate);
} catch (e) {
// console.error('Error adding ICE candidate:', event.candidate.candidate);
}
};
result.onicecandidateerror = (event) => {
// console.error('ICE candidate error:', event);
};
return result;
};
// client 1
const sendLocalOffer = async () => {
peerConnection.current = createPeerConnection();
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
stream.getTracks().forEach((track) => {
peerConnection.current!.addTrack(track, stream);
});
const offer = await peerConnection.current.createOffer();
await peerConnection.current.setLocalDescription(offer);
console.log('send local offer', offer);
socket.current?.emit('localoffer', offer);
};
// client 2
const receiveLocalOffer = async (offer: RTCSessionDescriptionInit) => {
peerConnection.current = createPeerConnection();
console.log('receive local offer', offer);
await peerConnection.current!.setRemoteDescription(offer);
const answer = await peerConnection.current!.createAnswer();
await peerConnection.current!.setLocalDescription(answer);
console.log('send local answer', answer);
socket.current?.emit('localanswer', answer);
};
// client 1
const receiveLocalAnswer = async (answer: RTCSessionDescriptionInit) => {
console.log('receive local answer', answer);
await peerConnection.current!.setRemoteDescription(answer);
}; |
In your webrtc dump I see the following:
If both browsers are on the same machine, I think you shouldn't need any STUN servers; maybe worth removing those and testing again.
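A minimal sketch of that change:

// Same-machine test: host candidates should suffice, so skip STUN entirely.
const result = new RTCPeerConnection({ iceServers: [] });
|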
Just tried it, no luck: |
You likely need to extend your signaling channel to send ICE candidates between the clients as well. See https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_API/Perfect_negotiation for more details. Note that you will have to use the "old API" versions it lists on the server eventually.
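A sketch of what that could look like with the existing socket; the 'icecandidate' event name is hypothetical, and the server relay would mirror the localoffer/localanswer handlers:

// Sender side: forward each local candidate to the peer via signaling.
result.onicecandidate = (event: RTCPeerConnectionIceEvent) => {
  if (event.candidate) {
    socket.current?.emit('icecandidate', event.candidate.toJSON());
  }
};

// Receiver side: add the relayed candidate to the local peer connection.
socket.current?.on('icecandidate', async (candidate: RTCIceCandidateInit) => {
  try {
    await peerConnection.current!.addIceCandidate(candidate);
  } catch (e) {
    console.error('Error adding relayed ICE candidate:', e);
  }
});
|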
Is it because both clients are technically on the same host? Aren't the ice candidates properly added right now on both clients? |
The dump seems to show only one of the peers having candidates (though yes, the logs are different). However, your existing onicecandidate handler isn't doing the right thing: currently, your code is just logging and adding the candidate locally, when what it should be doing is sending the candidate for the peer to add. |
Oh got it, you're right :) I'm now able to get to the connected state in the two-browser test, as well as in the original node-webrtc code. Not sure I understood: why would I eventually have to use the "old API" versions? Upon connection, I don't hear anything in the browser. The received stream is active, and the track in it is enabled and not muted, with a readyState of "live". I'm trying to play the audio stream in the ontrack handler as before. The webrtc-internals audio debug input contains a sort of low "metallic" hum, and the audio debug output is completely silent. |
The version of WebRTC in node-webrtc is older than what's found in browsers, so I am not 100% sure it has the behavior of these new APIs. It might, though!
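Concretely, the distinction that article draws is roughly this (a sketch; pc is either side's RTCPeerConnection):

// "New" parameterless form used by the MDN perfect-negotiation example:
async function makeOfferNewApi(pc: RTCPeerConnection) {
  await pc.setLocalDescription();
}

// "Old" explicit form, more likely to work on older stacks like node-webrtc:
async function makeOfferOldApi(pc: RTCPeerConnection) {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
}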
Not sure! This dump is from the receiving end and indeed it shows no packets being received. What about the sending end? |
Got it, hopefully it's fine as is. No time to update node-webrtc? :)
It seems to work when both ends are browsers, but it might be worth double-checking the logs in case I'm missing anything. How can I check the sent packets when node-webrtc is the sending end? So far I've been relying on webrtc-internals. |
Logs seem fine. Unfortunately we don't have webrtc-internals for node-webrtc; I've usually relied on console log statements for debugging. Not sure why packets still aren't being sent from it, sorry.
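One possible stopgap, assuming your node-webrtc build supports the spec-compliant getStats(): poll outbound-rtp stats on the sending side and log the packet counts.

// Rough webrtc-internals substitute on the server.
// Assumes peerConnection is the node-webrtc RTCPeerConnection doing the sending.
setInterval(async () => {
  const stats = await peerConnection.getStats();
  stats.forEach((report: any) => {
    if (report.type === 'outbound-rtp' && (report.kind ?? report.mediaType) === 'audio') {
      console.log('audio packetsSent:', report.packetsSent, 'bytesSent:', report.bytesSent);
    }
  });
}, 1000);
|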
Hmm got it. Please let me know if any ideas come up. Thanks for your help on this so far! |
Realizing #13 could be related actually. Not that I've particularly made too much progress on that front... |
I'm trying to have my Node.js backend act as a peer that streams PCM audio to the browser (let's assume it's the only way to achieve what I need).
I'm creating an RTCAudioSource and feeding it an array of chunked buffers: PCM, 24 kHz sample rate, signed 16-bit little-endian.
RTCAudioSource/onData only accepts an Int16Array in samples, so I tried converting the buffers:
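Roughly like this; a reconstructed sketch matching the wrapper shown above, where source is the RTCAudioSource and buffer is one PCM chunk:

// Reinterpret the Buffer's bytes as 16-bit signed samples without copying.
const samples = new Int16Array(
  buffer.buffer,
  buffer.byteOffset,
  buffer.byteLength / Int16Array.BYTES_PER_ELEMENT,
);
source.onData({
  samples,
  sampleRate: 24000,
  numberOfFrames: samples.length, // not necessarily 10 ms worth, hence the error below
});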
source.onData() throws the following error, with x being anywhere from 17 to 2770 depending on the chunk size:

Expected a .byteLength of 480, not x

(480 bytes is 240 samples * 2 bytes, i.e. exactly 10 ms of 16-bit audio at 24 kHz.)
It's worth noting that the PCM buffers play fine when writing them to a WAV file or when emitting and playing them in the browser via WebSockets.