
We have a file sent from an HTML form to a CGI (Perl) script. We know that files uploaded through CGI.pm are spooled to a temporary file on disk (rather than kept in RAM). My question: is it possible to read that file in chunks from within the CGI script (say, into an array)? The code is as simple as possible:

#!/usr/bin/perl

use strict;
use warnings;
use CGI;
use CGI::Carp qw ( fatalsToBrowser );

my $q = new CGI;

my @file = $q->upload("file"); # here I think must be some while(<>) {} for reading from N byte to X byte.

And if that is possible, I have a second sub-question: can the chunks be read out of order (say, first bytes 0 to 500, then bytes 1000 to 1500, then bytes 501 to 999)? Thanks!

Arsenii
  • When I see "CGI.PM", I find myself wondering if you are using VMS or DOS. – tjd Nov 04 '15 at 18:56
  • @tjd When I see your comment I find myself wondering what VMS and DOS are (unless you mean DOS the OS)? Would you please kindly explain? Thanks! – Arsenii Nov 05 '15 at 09:39
  • VMS & DOS are two OSs that use strictly upper case file names. In most any other OS the file name would be "CGI.pm". – tjd Nov 05 '15 at 12:47

1 Answer


Yes, by using read.
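A minimal sketch of that approach, assuming the form field is still named "file" and that the handle returned by upload points at the spooled temporary file (so it supports seek); the byte ranges below simply mirror the ones in the question:

#!/usr/bin/perl

use strict;
use warnings;
use CGI;
use CGI::Carp qw( fatalsToBrowser );

my $q = CGI->new;

# upload() returns a filehandle opened on the temporary file
# that CGI.pm spooled to disk for this form field.
my $fh = $q->upload("file")
    or die "No upload received for field 'file'";

binmode($fh);   # treat the upload as raw bytes

my $buf;

# Bytes 0..499
read($fh, $buf, 500);

# Jump ahead and take bytes 1000..1499
seek($fh, 1000, 0);
read($fh, $buf, 500);

# Then go back for bytes 501..999
seek($fh, 501, 0);
read($fh, $buf, 499);

Because read and seek work on an ordinary filehandle, the chunks can be read in any order; each seek repositions the file pointer before the next read.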

ikegami