A more efficient way to transfer files over a socket in Dart?


I am trying to build a Flutter app to send files to any device over my local network. I'm using sockets to send and receive the data. The transfer is successful, but the whole process is very resource (RAM) intensive (I believe because of the Uint8List and the BytesBuilder created on the server and client side).

Because of this I can only send files of around 100–200 MB before running out of memory. I was wondering whether it is possible to send data more efficiently by optimizing the code, or by using another method of file transfer altogether.

Server side code:

import 'dart:io';
import 'dart:typed_data';

void main() async {
  final server = await ServerSocket.bind('localhost', 2714);
  server.listen((client) {
    handleClient(client);
  });
}

void handleClient(Socket client) async {
  File file = File('1.zip');
  Uint8List bytes = file.readAsBytesSync();
  print("Connection from:"
      "${client.remoteAddress.address}:${client.remotePort}");
  client.listen((Uint8List data) async {
    await Future.delayed(Duration(seconds: 1));
    final request = String.fromCharCodes(data);
    if (request == 'Send Data') {
      client.add(bytes);
    }
    client.close();
  });
}

Client side code:

import 'dart:io';
import 'dart:typed_data';

int size = 0;
void main() async {
  final socket = await Socket.connect('localhost', 2714);
  print("Connected to:" '${socket.remoteAddress.address}:${socket.remotePort}');
  socket.write('Send Data');
  socket.listen((Uint8List data) async {
    await Future.delayed(Duration(seconds: 1));
    dataHandler(data);
    size = size + data.lengthInBytes;
    print(size);
    print("ok: data written");
  });
  await Future.delayed(Duration(seconds: 20));
  socket.close();
  socket.destroy();
}

BytesBuilder builder = new BytesBuilder(copy: false);
void dataHandler(Uint8List data) {
  builder.add(data);

  if (builder.length <= 1049497288) {
    // file size
    Uint8List dt = builder.toBytes();

    writeToFile(
        dt.buffer.asUint8List(0, dt.buffer.lengthInBytes), '1(recieved).zip');
  }
}

Future<void> writeToFile(Uint8List data, String path) {
  // final buffer = data.buffer;
  return new File(path).writeAsBytes(data);
}

I'm still very much new to Dart and Flutter; any help would be appreciated, thanks.

CodePudding user response:

@pskink gave you most of the answer, but you are right that you're running into issues because you are reading the entire file into memory on the server side, and then accumulating all the received bytes into memory before writing on the client side.

You can stream the data from disk to socket on the server side, and stream from socket to disk on the client side. Here's the full code:

import 'dart:io';

void main() async {
  final server = await ServerSocket.bind('localhost', 2714);
  server.listen((client) async {
      await File('1.zip').openRead().pipe(client);
  });
}
And the client:

import 'dart:io';
import 'dart:typed_data';

void main() async {
  var socket = await Socket.connect('localhost', 2714);
  try {
    print(
        "Connected to:" '${socket.remoteAddress.address}:${socket.remotePort}');
    socket.write('Send Data');

    var file = File('1_received.zip').openWrite();
    try {
      await socket.map(toIntList).pipe(file);
    } finally {
      file.close();
    }
  } finally {
    socket.destroy();
  }
}

List<int> toIntList(Uint8List source) {
  return List.from(source);
}

File.openRead, File.openWrite and the socket all work with asynchronous streams. See https://dart.dev/tutorials/language/streams for more details.

The pipe calls are hiding all of the work here, but in other cases you might consume the streams with an await for (var element in stream) loop.
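For example, the client's receive side could be written with await for instead of pipe, which makes the chunk-by-chunk flow explicit. This is a sketch only, reusing the host, port, and file names from the code above:

```dart
import 'dart:io';

void main() async {
  final socket = await Socket.connect('localhost', 2714);
  socket.write('Send Data');

  final sink = File('1_received.zip').openWrite();
  try {
    // Each iteration handles one chunk as it arrives from the socket;
    // nothing is accumulated in memory.
    await for (final chunk in socket) {
      sink.add(chunk);
    }
  } finally {
    await sink.close();
    socket.destroy();
  }
}
```

Functionally this is equivalent to the pipe version, but it gives you a place to add per-chunk logic such as progress reporting.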

There's one thing to keep in mind here: in await File('1.zip').openRead().pipe(client), there is no corresponding close method for openRead. The file handle is automatically closed once you reach the end of the stream. I've had some issues when reading many files in quick succession, even when each was read to the end before the next one was opened, resulting in "Too many open files" errors.

I didn't investigate further because I was reading small files and switched to using readAsBytesSync, but it's possible the handles are only closed during GC or there's some other reason why they're not immediately released, which puts a limit to the number of files you can read this way within a short time span.
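If deterministic handle release matters, one alternative is to manage the handle yourself with RandomAccessFile, closing it explicitly after the last read. A sketch, assuming an arbitrary 64 KiB chunk size (sendFile and its parameters are hypothetical names, not from the answer above):

```dart
import 'dart:io';
import 'dart:typed_data';

// Stream a file to a socket with an explicitly managed file handle,
// so the descriptor is released as soon as close() completes rather
// than whenever the stream tears itself down.
Future<void> sendFile(String path, Socket client) async {
  final raf = await File(path).open();
  try {
    final buffer = Uint8List(64 * 1024); // arbitrary chunk size
    while (true) {
      final n = await raf.readInto(buffer);
      if (n == 0) break; // end of file reached
      client.add(buffer.sublist(0, n));
    }
    await client.flush();
  } finally {
    await raf.close(); // deterministic release of the file handle
  }
}
```

This still reads and sends one chunk at a time, so memory use stays flat regardless of file size.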
