
Cython

Cython: C-Extensions for Python. A Python-to-C compiler.

Wrapping a C library in Python: C, Cython or ctypes?

I want to call a C library from a Python application. I don't want to wrap the whole API, only the functions and datatypes that are relevant to my case. As I see it, I have three choices:

  1. Create an actual extension module in C. Probably overkill, and I'd also like to avoid the overhead of learning extension writing.
  2. Use Cython to expose the relevant parts from the C library to Python.
  3. Do the whole thing in Python, using ctypes to communicate with the external library.

I'm not sure whether 2) or 3) is the better choice. The advantage of 3) is that ctypes is part of the standard library, and the resulting code would be pure Python – although I'm not sure how big that advantage actually is.
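For a sense of what option 3 looks like in practice, here is a minimal ctypes sketch; the library name libfoo.so and the add_numbers signature are hypothetical placeholders, not anything from the actual library:

import ctypes

# Load the shared library (hypothetical name; adjust path/extension per platform).
libfoo = ctypes.CDLL("libfoo.so")

# Declare the prototype of a hypothetical C function:
#     double add_numbers(double a, double b);
libfoo.add_numbers.argtypes = [ctypes.c_double, ctypes.c_double]
libfoo.add_numbers.restype = ctypes.c_double

print libfoo.add_numbers(1.5, 2.5)   # calls straight into the C library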

Are there more advantages / disadvantages with either choice? Which approach do you recommend?


Edit: Thanks for all your answers; they provide a good resource for anyone looking to do something similar. The decision, of course, still has to be made for each individual case; there's no one "This is the right thing" sort of answer. For my own case, I'll probably go with ctypes, but I'm also looking forward to trying out Cython in some other project.

Since there is no single true answer, accepting one is somewhat arbitrary; I chose FogleBird's answer as it provides some good insight into ctypes and is currently also the highest-voted answer. However, I suggest reading all the answers to get a good overview.

Thanks again.


Source: (StackOverflow)

Compiling with cython and mingw produces gcc: error: unrecognized command line option '-mno-cygwin'

I'm trying to compile a Python extension with Cython on Windows 7 64-bit using MinGW (64-bit).
I'm working with Python 2.6 (ActivePython 2.6.6) and with the appropriate distutils.cfg file (setting mingw as the compiler).

When executing

> C:\Python26\programas\Cython>python setup.py build_ext --inplace

I get an error saying that gcc does not recognize the -mno-cygwin option:

> C:\Python26\programas\Cython>python setup.py build_ext --inplace
running build_ext
skipping 'hello2.c' Cython extension (up-to-date)
building 'hello2' extension
C:\mingw\bin\gcc.exe -mno-cygwin -mdll -O -Wall -IC:\Python26\include -IC:\Python26\PC -c hello2.c -o build\temp.win-amd64-2.6\Release\hello2.o
gcc: error: unrecognized command line option '-mno-cygwin'
error: command 'gcc' failed with exit status 1

gcc is:

C:\>gcc --version
gcc (GCC) 4.7.0 20110430 (experimental)
Copyright (C) 2011 Free Software Foundation, Inc.

How could I fix it?
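For reference, the usual workaround at the time was to remove the obsolete -mno-cygwin flag that distutils hard-codes for MinGW builds, since gcc 4.6 and later no longer accept it. A rough sketch of a one-off patch script, assuming the ActivePython 2.6 layout shown above and that keeping a backup copy is acceptable:

# Strip the obsolete -mno-cygwin flag from distutils' MinGW compiler settings.
# This edits the standard library in place, so keep the .bak copy it writes.
path = r"C:\Python26\Lib\distutils\cygwinccompiler.py"
with open(path) as f:
    src = f.read()
with open(path + ".bak", "w") as f:
    f.write(src)
with open(path, "w") as f:
    f.write(src.replace(" -mno-cygwin", ""))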


Source: (StackOverflow)


How should I structure a Python package that contains Cython code

I'd like to make a Python package containing some Cython code. I've got the Cython code working nicely. However, now I want to know how best to package it.

For most people who just want to install the package, I'd like to include the .c file that Cython creates, and arrange for setup.py to compile that to produce the module. Then the user doesn't need Cython installed in order to install the package.

But for people who may want to modify the package, I'd also like to provide the Cython .pyx files, and somehow also allow for setup.py to build them using Cython (so those users would need Cython installed).

How should I structure the files in the package to cater for both these scenarios?

The Cython documentation gives a little guidance. But it doesn't say how to make a single setup.py that handles both the with/without Cython cases.
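A common pattern for this (a sketch only; the package and module names mypackage/fast are placeholders) is to ship the generated .c file and use Cython only when it is importable, falling back to the pregenerated C source otherwise:

from distutils.core import setup
from distutils.extension import Extension

# Use Cython if it is installed; otherwise compile the shipped .c file.
try:
    from Cython.Build import cythonize
    USE_CYTHON = True
except ImportError:
    USE_CYTHON = False

ext = '.pyx' if USE_CYTHON else '.c'
extensions = [Extension("mypackage.fast", ["mypackage/fast" + ext])]

if USE_CYTHON:
    extensions = cythonize(extensions)

setup(
    name='mypackage',
    packages=['mypackage'],
    ext_modules=extensions,
)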


Source: (StackOverflow)

Can Cython compile to an EXE?

I know what Cython's purpose is: to write compilable C extensions in a Python-like language in order to speed up your code. What I would like to know (and can't seem to find using my google-fu) is whether Cython can somehow compile to an executable format, since it already seems to break Python code down into C.

I already use Py2Exe, which is just a packager, but I'm interested in using Cython to compile down to something that is a little harder to unpack (anything packed using Py2Exe can basically just be extracted with 7-Zip, which I do not want).

It seems that if this is not possible, my next best alternative would be to compile all my code, load it as a module, and then package that using Py2Exe, at least getting most of my code into compiled form, right?
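For what it's worth, Cython does ship an --embed switch that wraps a script in a C main() function; the generated C file can then be compiled and linked against libpython to produce a standalone executable (the Python runtime is still required at run time, so this is not a fully static binary). A rough sketch, where the file name app.py and the build commands are assumptions about a Unix-like setup:

# app.py -- any ordinary script; --embed generates a C main() that runs it
def main():
    print "hello from an embedded interpreter"

if __name__ == "__main__":
    main()

# Rough build recipe (shell commands shown as comments):
#   cython --embed -o app.c app.py
#   gcc app.c -o app $(python-config --cflags --ldflags)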


Source: (StackOverflow)

Distributing Cython based extensions using LAPACK

I am writing a Python module that includes Cython extensions and uses LAPACK (and BLAS). I am open to using either clapack or lapacke, or some kind of f2c or f2py solution if necessary. What is important is that I am able to call lapack and blas routines from Cython in tight loops without Python call overhead.

I've found one example here. However, that example depends on SAGE. I want my module to be installable anywhere numpy is installed. What is the best combination of interfaces to use, and what would the minimal setup.py file look like that could fetch the necessary information from numpy for compilation?

EDIT: Here is what I ended up doing. It works on my MacBook, but I have no idea how portable it is. Surely there's a better way.

from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
import numpy
from Cython.Build import cythonize
from numpy.distutils.system_info import get_info

# TODO: This cannot be the right way
blas_include = get_info('blas_opt')['extra_compile_args'][1][2:]
includes = [blas_include,numpy.get_include()]

setup(
    cmdclass = {'build_ext': build_ext},
    ext_modules = cythonize([Extension("cylapack", ["cylapack.pyx"],
                                       include_dirs = includes,
                                       libraries=['blas','lapack'])
                   ])
)

This works because, on my MacBook, the clapack.h header file is in the same directory as cblas.h. I can then do this in my .pyx file:

ctypedef np.int32_t integer

cdef extern from "cblas.h":
    double cblas_dnrm2(int N,double *X, int incX)
cdef extern from "clapack.h":
    integer dgelsy_(integer *m, integer *n, integer *nrhs, 
    double *a, integer *lda, double *b, integer *ldb, integer *
    jpvt, double *rcond, integer *rank, double *work, integer *
    lwork, integer *info)

Source: (StackOverflow)

Share Large, Read-Only Numpy Array Between Multiprocessing Processes

I have a 60GB SciPy Array (Matrix) I must share between 5+ multiprocessing Process objects. I've seen numpy-sharedmem and read this discussion on the SciPy list. There seem to be two approaches: numpy-sharedmem, and using a multiprocessing.RawArray() and mapping NumPy dtypes to ctypes. Now, numpy-sharedmem seems to be the way to go, but I've yet to see a good reference example. I don't need any kind of locks, since the array (actually a matrix) will be read-only. Now, due to its size, I'd like to avoid a copy. It sounds like the correct method is to create the only copy of the array as a sharedmem array, and then pass it to the Process objects? A couple of specific questions:

  1. What's the best way to actually pass the sharedmem handles to sub-Process()es? Do I need a queue just to pass one array around? Would a pipe be better? Can I just pass it as an argument to the Process() subclass's __init__ (where I'm assuming it's pickled)?

  2. In the discussion I linked above, there's mention of numpy-sharedmem not being 64bit-safe? I'm definitely using some structures that aren't 32-bit addressable.

  3. Are there tradeoffs to the RawArray() approach? Slower, buggier?

  4. Do I need any ctype-to-dtype mapping for the numpy-sharedmem method?

  5. Does anyone have an example of some open-source code doing this? I'm a very hands-on learner and it's hard to get this working without any kind of good example to look at.

If there's any additional info I can provide to help clarify this for others, please comment and I'll add. Thanks!

This needs to run on Ubuntu Linux and maybe Mac OS, but portability isn't a huge concern.
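As a point of reference, one copy-free pattern (a sketch only, assuming a fork-based platform like Ubuntu and strictly read-only access, so no locking) backs a NumPy view with a multiprocessing.RawArray that child processes inherit without pickling the data:

import ctypes
import multiprocessing as mp
import numpy as np

def make_shared_array(shape):
    # Allocate a lock-free shared float64 buffer and wrap it as a NumPy array.
    size = int(np.prod(shape))
    raw = mp.RawArray(ctypes.c_double, size)
    arr = np.frombuffer(raw, dtype=np.float64).reshape(shape)
    return raw, arr

def worker(raw, shape, i):
    # Re-wrap the inherited buffer inside the child; still no copy is made.
    arr = np.frombuffer(raw, dtype=np.float64).reshape(shape)
    print i, arr[0, :3]

if __name__ == "__main__":
    shape = (1000, 1000)   # stand-in for the real 60 GB matrix
    raw, arr = make_shared_array(shape)
    arr[:] = np.random.rand(*shape)   # fill once, in the parent
    procs = [mp.Process(target=worker, args=(raw, shape, i)) for i in range(5)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()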


Source: (StackOverflow)

Are there advantages to using the Python/C interface instead of Cython?

I want to extend Python and NumPy by writing some modules in C or C++, using BLAS and LAPACK. I also want to be able to distribute the code as standalone C/C++ libraries. I would like these libraries to use both single- and double-precision floats. Some examples of functions I will write are conjugate gradient for solving linear systems or accelerated first-order methods. Some functions will need to call a Python function from the C/C++ code.

After playing a little with the Python/C API and the NumPy/C API, I discovered that many people advocate the use of Cython instead (see for example this question or this one). I am not an expert on Cython, but it seems that for some cases you still need to use the NumPy/C API and know how it works. Given that I already have (some little) knowledge of the Python/C API and none of Cython, I was wondering if it makes sense to keep on using the Python/C API, and whether using this API has some advantages over Cython. In the future, I will certainly develop some stuff not involving numerical computing, so this question is not only about NumPy. One of the things I like about the Python/C API is that I learn something about how the Python interpreter works.

Thanks.


Source: (StackOverflow)

Cython: "fatal error: numpy/arrayobject.h: No such file or directory"

I'm trying to speed up the answer here using Cython. I try to compile the code (after doing the cygwinccompiler.py hack explained here), but get a fatal error: numpy/arrayobject.h: No such file or directory...compilation terminated error. Can anyone tell me if it's a problem with my code, or some esoteric subtlety with Cython?

Below is my code. Thanks in advance:

import numpy as np
import scipy as sp
cimport numpy as np
cimport cython

cdef inline np.ndarray[np.int, ndim=1] fbincount(np.ndarray[np.int_t, ndim=1] x):
    cdef int m = np.amax(x)+1
    cdef int n = x.size
    cdef unsigned int i
    cdef np.ndarray[np.int_t, ndim=1] c = np.zeros(m, dtype=np.int)

    for i in xrange(n):
        c[<unsigned int>x[i]] += 1

    return c

cdef packed struct Point:
    np.float64_t f0, f1

@cython.boundscheck(False)
def sparsemaker(np.ndarray[np.float_t, ndim=2] X not None,
                np.ndarray[np.float_t, ndim=2] Y not None,
                np.ndarray[np.float_t, ndim=2] Z not None):

    cdef np.ndarray[np.float64_t, ndim=1] counts, factor
    cdef np.ndarray[np.int_t, ndim=1] row, col, repeats
    cdef np.ndarray[Point] indices

    cdef int x_, y_

    _, row = np.unique(X, return_inverse=True); x_ = _.size
    _, col = np.unique(Y, return_inverse=True); y_ = _.size
    indices = np.rec.fromarrays([row,col])
    _, repeats = np.unique(indices, return_inverse=True)
    counts = 1. / fbincount(repeats)
    Z.flat *= counts.take(repeats)

    return sp.sparse.csr_matrix((Z.flat,(row,col)), shape=(x_, y_)).toarray()

Source: (StackOverflow)

Complex numbers in Cython

What is the correct way to work with complex numbers in Cython?

I would like to write a pure C loop using a numpy.ndarray of dtype np.complex128. In Cython, the associated C type is defined in Cython/Includes/numpy/__init__.pxd as

ctypedef double complex complex128_t

so it seems this is just a simple C double complex.

However, it's easy to obtain strange behaviors. In particular, with these definitions

cimport numpy as np
import numpy as np
np.import_array()

cdef extern from "complex.h":
    pass

cdef:
    np.complex128_t varc128 = 1j
    np.float64_t varf64 = 1.
    double complex vardc = 1j
    double vard = 1.

the line

varc128 = varc128 * varf64

can be compiled by Cython but gcc cannot compile the C code produced (the error is "testcplx.c:663:25: error: two or more data types in declaration specifiers" and seems to be due to the line typedef npy_float64 _Complex __pyx_t_npy_float64_complex;). This error has already been reported (for example here) but I didn't find any good explanation and/or clean solution.

Without inclusion of complex.h, there is no error (I guess because the typedef is then not included).

However, there is still a problem since in the html file produced by cython -a testcplx.pyx, the line varc128 = varc128 * varf64 is yellow, meaning that it has not been translated into pure C. The corresponding C code is:

__pyx_t_2 = __Pyx_c_prod_npy_float64(__pyx_t_npy_float64_complex_from_parts(__Pyx_CREAL(__pyx_v_8testcplx_varc128), __Pyx_CIMAG(__pyx_v_8testcplx_varc128)), __pyx_t_npy_float64_complex_from_parts(__pyx_v_8testcplx_varf64, 0));
__pyx_v_8testcplx_varc128 = __pyx_t_double_complex_from_parts(__Pyx_CREAL(__pyx_t_2), __Pyx_CIMAG(__pyx_t_2));

and the __Pyx_CREAL and __Pyx_CIMAG are orange (Python calls).

Interestingly, the line

vardc = vardc * vard

does not produce any error and is translated into pure C (just __pyx_v_8testcplx_vardc = __Pyx_c_prod(__pyx_v_8testcplx_vardc, __pyx_t_double_complex_from_parts(__pyx_v_8testcplx_vard, 0));), whereas it is very very similar to the first one.

I can avoid the error by using intermediate variables (and it translates into pure C):

vardc = varc128
vard = varf64
varc128 = vardc * vard

or simply by casting (but it does not translate into pure C):

vardc = <double complex>varc128 * <double>varf64

So what happens? What is the meaning of the compilation error? Is there a clean way to avoid it? Why does the multiplication of a np.complex128_t and np.float64_t seem to involve Python calls?

Versions

Cython version 0.22 (the most recent version on PyPI when the question was asked) and GCC 4.9.2.

Repository

I created a tiny repository with the example (hg clone https://bitbucket.org/paugier/test_cython_complex) and a tiny Makefile with 3 targets (make clean, make build, make html) so it is easy to test anything.


Source: (StackOverflow)

Speeding up pairing of strings into objects in Python

I'm trying to find an efficient way to pair together rows of data containing integer points, and store them as Python objects. The data is made up of X and Y coordinate points, represented as comma-separated strings. The points have to be paired, as in (x_1, y_1), (x_2, y_2), ... etc., and then stored as a list of objects, where each point is an object. The function get_data below generates this example data:

def get_data(N=100000, M=10):
    import random
    data = []
    for n in range(N):
        pair = [[str(random.randint(1, 10)) for x in range(M)],
                [str(random.randint(1, 10)) for x in range(M)]]
        row = [",".join(pair[0]),
               ",".join(pair[1])]
        data.append(row)
    return data

The parsing code I have now is:

class Point:
    def __init__(self, a, b):
        self.a = a
        self.b = b

def test():
    import time
    data = get_data()
    all_point_sets = []
    time_start = time.time()
    for row in data:
        point_set = []
        first_points, second_points = row
        # Convert points from strings to integers
        first_points = map(int, first_points.split(","))
        second_points = map(int, second_points.split(","))
        paired_points = zip(first_points, second_points)
        curr_points = [Point(p[0], p[1]) \
                       for p in paired_points]
        all_point_sets.append(curr_points)
    time_end = time.time()
    print "total time: ", (time_end - time_start)

Currently, this takes nearly 7 seconds for 100,000 points, which seems very inefficient. Part of the inefficiency seems to stem from the calculation of first_points, second_points and paired_points - and the conversion of these into objects.

Another part of the inefficiency seems to be the building up of all_point_sets. Taking out the all_point_sets.append(...) line seems to make the code go from ~7 seconds to 2 seconds!

How can this be sped up? Thanks.

FOLLOWUP: Thanks for everyone's great suggestions; they were all helpful. But even with all the improvements, it still takes about 3 seconds to process 100,000 entries. I'm not sure why it isn't just instant in this case, or whether there's an alternative representation that would make it instant. Would coding this in Cython change things? Could someone offer an example of that? Thanks again.
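A rough idea of what a Cython version might look like (a sketch only: it keeps the Point objects and the Python 2 string handling from the question and simply adds static typing to the inner loop; the file name pairs.pyx is a placeholder):

# pairs.pyx -- compile with cythonize; a sketch, not a tuned implementation
cdef class Point:
    cdef public int a, b
    def __init__(self, int a, int b):
        self.a = a
        self.b = b

def parse(data):
    cdef list all_point_sets = []
    cdef list point_set
    cdef int i, n
    for row in data:
        first = map(int, row[0].split(","))    # Python 2: map returns a list
        second = map(int, row[1].split(","))
        n = len(first)
        point_set = [None] * n
        for i in range(n):
            point_set[i] = Point(first[i], second[i])
        all_point_sets.append(point_set)
    return all_point_sets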


Source: (StackOverflow)

Simple wrapping of C code with cython

I have a number of C functions, and I would like to call them from Python. Cython seems to be the way to go, but I can't really find an example of how exactly this is done. My C function looks like this:

void calculate_daily ( char *db_name, int grid_id, int year,
                       double *dtmp, double *dtmn, double *dtmx, 
                       double *dprec, double *ddtr, double *dayl, 
                       double *dpet, double *dpar ) ;

All I want to do is specify the first three parameters (a string and two integers) and recover 8 NumPy arrays (or Python lists; all the double arrays have N elements). My code assumes that the pointers point to an already allocated chunk of memory. Also, the produced C code ought to link against some external libraries.
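A minimal wrapper sketch for a function of this shape (assumptions: the caller knows the array length n, calculate_daily fills the buffers, Python 2 string semantics for the char* argument, and the header name daily.h is a placeholder):

# daily.pyx -- a thin Cython wrapper; compile with the usual setup.py machinery
import numpy as np
cimport numpy as np
np.import_array()

cdef extern from "daily.h":
    void calculate_daily(char *db_name, int grid_id, int year,
                         double *dtmp, double *dtmn, double *dtmx,
                         double *dprec, double *ddtr, double *dayl,
                         double *dpet, double *dpar)

def py_calculate_daily(db_name, int grid_id, int year, int n):
    # Allocate the output buffers in NumPy so Python owns the memory.
    cdef np.ndarray[np.double_t, ndim=1] dtmp = np.empty(n, dtype=np.double)
    cdef np.ndarray[np.double_t, ndim=1] dtmn = np.empty(n, dtype=np.double)
    cdef np.ndarray[np.double_t, ndim=1] dtmx = np.empty(n, dtype=np.double)
    cdef np.ndarray[np.double_t, ndim=1] dprec = np.empty(n, dtype=np.double)
    cdef np.ndarray[np.double_t, ndim=1] ddtr = np.empty(n, dtype=np.double)
    cdef np.ndarray[np.double_t, ndim=1] dayl = np.empty(n, dtype=np.double)
    cdef np.ndarray[np.double_t, ndim=1] dpet = np.empty(n, dtype=np.double)
    cdef np.ndarray[np.double_t, ndim=1] dpar = np.empty(n, dtype=np.double)
    calculate_daily(db_name, grid_id, year,
                    &dtmp[0], &dtmn[0], &dtmx[0], &dprec[0],
                    &ddtr[0], &dayl[0], &dpet[0], &dpar[0])
    return dtmp, dtmn, dtmx, dprec, ddtr, dayl, dpet, dpar

The external libraries mentioned above would then be listed in the Extension's libraries and library_dirs arguments in setup.py.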


Source: (StackOverflow)

Make distutils look for numpy header files in the correct place

In my installation, numpy's arrayobject.h is located at …/site-packages/numpy/core/include/numpy/arrayobject.h. I wrote a trivial Cython script that uses numpy:

cimport numpy as np

def say_hello_to(name):
    print("Hello %s!" % name)

I also have the following distutils setup.py (copied from the Cython user guide):

from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

ext_modules = [Extension("hello", ["hello.pyx"])]

setup(
  name = 'Hello world app',
  cmdclass = {'build_ext': build_ext},
  ext_modules = ext_modules
)

When I try to build with python setup.py build_ext --inplace, Cython tries to do the following:

gcc -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd \
-fno-common -dynamic -DNDEBUG -g -Os -Wall -Wstrict-prototypes -DMACOSX \
-I/usr/include/ffi -DENABLE_DTRACE -arch i386 -arch ppc -pipe \
-I/System/Library/Frameworks/Python.framework/Versions/2.5/include/python2.5 \
-c hello.c -o build/temp.macosx-10.5-i386-2.5/hello.o

Predictably, this fails to find arrayobject.h. How can I make distutils use the correct location of numpy include files (without making the user define $CFLAGS)?
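The usual fix (a sketch based on the same setup.py, with numpy.get_include() supplying the header directory) looks something like this:

from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
import numpy

# Point the compiler at numpy's bundled headers instead of relying on $CFLAGS.
ext_modules = [Extension("hello", ["hello.pyx"],
                         include_dirs=[numpy.get_include()])]

setup(
  name = 'Hello world app',
  cmdclass = {'build_ext': build_ext},
  ext_modules = ext_modules
)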


Source: (StackOverflow)

Wrapping c++ struct template using cython

I am attempting to access the struct

template <int dim>
struct Data { 
  double X[dim];
  double Val[dim];
}; 

in Cython. I was guessing the correct syntax should be something like:

cdef extern from "Lib.h" namespace "LIB":
    cdef struct Data[int dim]:
      double X[dim];
      double Val[dim];

However, I am getting a syntax error. What is the correct syntax (if it is even possible)?
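One workaround that should work (a sketch, not verified against this particular library) is to declare each needed instantiation under an alias whose C name pins the template argument, since Cython's template support is geared toward type parameters rather than integer ones:

# Alias the concrete instantiation LIB::Data<3> under a Cython-visible name.
# Remember to build the extension with language="c++".
cdef extern from "Lib.h":
    cdef cppclass Data3 "LIB::Data<3>":
        double X[3]
        double Val[3]

def use_data3():
    cdef Data3 d
    d.X[0] = 1.0
    d.Val[0] = 2.0
    return d.X[0] + d.Val[0]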


Source: (StackOverflow)

Writing a Python extension in Go (golang)

I currently use Cython to link C and Python, and get speedups in slow bits of Python code. However, I'd like to use goroutines to implement a really slow (and very parallelizable) bit of code, but it must be callable from Python. (I've already seen this question.)

I'm (sort of) happy to go via C (or Cython) to set up data structures etc if necessary, but avoiding this extra layer would be good from a bug fix/avoidance point of view.

What is the simplest way to do this without having to reinvent any wheels?


Source: (StackOverflow)

Cython Speed Boost vs. Usability

I just came across Cython while I was looking for ways to optimize Python code. I read various posts on Stack Overflow, the Python wiki, and the article "General Rules for Optimization".

Cython is what grabs my interest the most; instead of writing C code yourself, you can choose to have other (C-level) datatypes in your Python code itself.

Here is a silly test I tried:

#!/usr/bin/python
# test.pyx
def test(value):
    for i in xrange(value):
        i**2
        if i == 1000000:
            print i

test(10000001)

$ time python test.pyx

real    0m16.774s 
user    0m16.745s
sys     0m0.024s

$ time cython test.pyx

real    0m0.513s 
user    0m0.196s 
sys     0m0.052s

Now, honestly, I'm dumbfounded. The code I have used here is pure Python code, and all I have changed is the interpreter. In this case, if Cython is this good, then why do people still use the traditional Python interpreter? Are there any reliability issues with Cython?
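As an aside, a worked sketch of the usual workflow may clarify what the two timings above actually measure: the cython command only translates a .pyx file into C and does not run it, so the 0.5 s figure is translation time, not execution time. Assuming the file is renamed cytest.pyx, a minimal build and run looks roughly like this:

# setup.py -- compile cytest.pyx into an importable extension module
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext

setup(
    cmdclass={'build_ext': build_ext},
    ext_modules=[Extension("cytest", ["cytest.pyx"])],
)

# Build and run (shell commands shown as comments):
#   python setup.py build_ext --inplace
#   python -c "import cytest; cytest.test(10000001)"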


Source: (StackOverflow)