Python Programming Language

writing to a file


As I understand it, there are two ways to write data to a file: using
f.write("foo") and print >>f, "foo".
What I want to know is which one is faster (if there is any difference
in speed), since I'm working with very large files. Of course, if there
is any other way to write data to a file, I'd love to hear about it.
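If you want to measure the difference yourself, here is a sketch using the standard timeit module (the file name and loop counts are arbitrary choices, and exact numbers depend on your platform):

```python
from __future__ import print_function  # gives print() on Python 2 as well
import os
import tempfile
import timeit

path = os.path.join(tempfile.gettempdir(), 'write_speed_test.txt')

def with_write():
    f = open(path, 'w')
    for _ in range(10000):
        f.write('foo\n')
    f.close()

def with_print():
    f = open(path, 'w')
    for _ in range(10000):
        print('foo', file=f)  # print appends the '\n' itself
    f.close()

t_write = timeit.timeit(with_write, number=10)
t_print = timeit.timeit(with_print, number=10)
print('f.write: %.3fs   print: %.3fs' % (t_write, t_print))
```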

montyphy@gmail.com wrote:
> as i understand there are two ways to write data to a file: using
> f.write("foo") and print >>f, "foo".

Well, print will add a '\n' (or a ' ' if you put a ',' after it).
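To make that concrete, here is a small sketch of the difference, written with the print() function form (which behaves the same way; end=' ' corresponds to the trailing comma):

```python
from __future__ import print_function  # gives print() on Python 2 as well
try:
    from StringIO import StringIO  # Python 2
except ImportError:
    from io import StringIO        # Python 3

f = StringIO()
f.write('foo')                 # write() adds nothing
print('foo', file=f)           # print appends '\n'
print('foo', end=' ', file=f)  # the trailing-comma form: a ' ' instead
```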

> what i want to know is which one is faster (if there is any difference

There shouldn't be any noticeable difference.

> in speed) since i'm working with very large files. of course, if there
> is any other way to write data to a file, i'd love to hear about it

Other ways:
os.system('cat file1 >> file2')
or subprocess.Popen
or print with sys.stdout = f
or ctypes + printf/fputs/...

and probably there are other obscure ways, but the intended way is
obviously f.write.

nsz
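The `sys.stdout = f` trick from the list above works like this (a sketch; the file name is arbitrary, and the old stdout should always be restored):

```python
import os
import sys
import tempfile

path = os.path.join(tempfile.gettempdir(), 'redirect_demo.txt')

f = open(path, 'w')
old_stdout = sys.stdout
sys.stdout = f
try:
    print('hello')           # goes into the file, not the terminal
finally:
    sys.stdout = old_stdout  # always restore, even on error
    f.close()
```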

montyphy@gmail.com wrote:

> as i understand there are two ways to write data to a file: using
> f.write("foo") and print >>f, "foo".
> what i want to know is which one is faster (if there is any difference
> in speed) since i'm working with very large files. of course, if there
> is any other way to write data to a file, i'd love to hear about it

You should look at the mmap module.

Diez

On May 30, 1:41 pm, "Diez B. Roggisch" <d@nospam.web.de> wrote:

> montyphy@gmail.com wrote:
> > what i want to know is which one is faster (if there is any difference
> > in speed) since i'm working with very large files. of course, if there
> > is any other way to write data to a file, i'd love to hear about it

> You should look at the mmap-module.

Yes, memory mappings can be more efficient than files accessed through
file descriptors. But mmap does not take an offset parameter, and is
therefore not suited for working with large files. For example, you
only have a virtual address space of 4 GiB on a 32-bit system, so there
is no way mmap can reach the last 4 GiB of an 8 GiB file on a 32-bit
system. If mmap took an offset parameter, this would not be a problem.

However, numpy has a properly working memory mapped array class,
numpy.memmap. It can be used for fast file access. Numpy also has a
wide range of datatypes that are efficient for working with binary
data (e.g. an uint8 type for bytes), and a record array for working
with structured binary data. This makes numpy very attractive when
working with binary data files.

Get the latest numpy here: www.scipy.org.

Let us say you want to memory map a 24-bit RGB image of 640 x 480
pixels, located at an offset of 4096 bytes into the file 'myfile.dat'.
Here is how numpy could do it:

import numpy

byte = numpy.uint8
desc = numpy.dtype({'names': ['r', 'g', 'b'], 'formats': [byte, byte, byte]})
mm = numpy.memmap('myfile.dat', dtype=desc, offset=4096,
                  shape=(480, 640), order='C')
red = mm['r']
green = mm['g']
blue = mm['b']

Now you can access the RGB values simply by slicing the arrays red,
green, and blue. To set the R value of every other horizontal line to
0, you could simply write

red[::2,:] = 0

As always when working with memory mapped files, the changes are not
committed until the memory mapping is synchronized with the file
system. Thus, call

mm.sync()

when you want the actual write process to start.

The memory mapping will be closed when it is garbage collected
(typically when the reference count falls to zero) or when you call
mm.close().

On May 30, 4:53 pm, sturlamolden <sturlamol@yahoo.no> wrote:

> import numpy

> byte = numpy.uint8
> desc = numpy.dtype({'names':['r','g','b'],'formats':[byte,byte,byte]})
> mm = numpy.memmap('myfile.dat', dtype=desc, offset=4096,
> shape=(480,640), order='C')
> red = mm['r']
> green = mm['g']
> blue = mm['b']

Another thing you may commonly want to do is converting between numpy
uint8 arrays and raw strings. This is done with numpy.fromstring and
the array method tostring.

# reading from the mapping into a raw string
rstr = mm.tostring()

# writing a raw string back into the mapping; the dtype and shape must
# match, so parse with the record dtype and reshape
mm[:] = numpy.fromstring(rstr, dtype=desc).reshape(480, 640)
mm.sync()

On May 30, 4:53 pm, sturlamolden <sturlamol@yahoo.no> wrote:

> However, numpy has a properly working memory mapped array class,
> numpy.memmap.

It seems that NumPy's memmap uses a buffer from mmap, which makes both
of them defunct for large files. Damn.

mmap must be fixed.
