Intel MKL packaging

auxsvr
Hi,

There are several options to license Intel MKL (a free or community licence), which should make including it in OBS easier. If this is not allowed, what is the best way to package software that depends on it, e.g. numpy or pytorch?
--
Regards,
Peter

Re: Intel MKL packaging

Jan Engelhardt
On Sunday 2017-05-07 21:22, auxsvr wrote:

>There are several options to license Intel MKL (a free or community
>licence), which should make including it in OBS easier. If this is
>not allowed, what is the best way to package software that depends
>on it, e.g. numpy or pytorch?

The best is not to think about it --- because we already have
a python-numpy package.

Re: Intel MKL packaging

Rüdiger Meier


On 05/08/2017 01:52 AM, Jan Engelhardt wrote:
> On Sunday 2017-05-07 21:22, auxsvr wrote:
>
>> There are several options to license Intel MKL (a free or community
>> licence), which should make including it in OBS easier. If this is
>> not allowed, what is the best way to package software that depends
>> on it, e.g. numpy or pytorch?
>
> The best is not to think about it --- because we already have
> a python-numpy package.

The python-numpy we already have is obviously not what he wants ...

@Peter I don't think Intel MKL is allowed on OBS.

If you are able to build numpy against MKL locally, then you could
download openSUSE's source RPM of python-numpy and change the spec file
so that it builds against MKL.
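
If you do rebuild it that way, a quick sanity check is to look at numpy's
build-time configuration; a minimal sketch (the exact section names in the
output differ between numpy versions and BLAS backends):

    # Print the BLAS/LAPACK configuration numpy was compiled against.
    # A build against MKL should list mkl-related sections; the stock
    # openSUSE build lists openblas or the reference blas instead.
    import numpy
    print(numpy.__version__)
    numpy.show_config()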

On the other hand, you may already see quite good numpy performance just
by installing openblas.

openblas as well as Intel MKL provide optimized libblas.so and
liblapack.so implementations, which can speed up numpy dramatically even
without a rebuild. Just make sure that numpy uses the libraries from
openblas or MKL at run time. To use some of the extra MKL functionality
beyond LAPACK, though, you may need MKL at build time.
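
To check which BLAS/LAPACK shared objects numpy actually picked up at run
time, one option is to scan the process's own memory map after importing
numpy; a minimal, Linux-only sketch (it simply greps /proc/self/maps):

    # After importing numpy, list the BLAS/LAPACK shared libraries that
    # the interpreter has mapped. Linux only (relies on /proc).
    import numpy  # importing numpy loads its BLAS/LAPACK backend

    with open("/proc/self/maps") as maps:
        libs = {line.split()[-1] for line in maps
                if any(name in line.lower()
                       for name in ("blas", "lapack", "mkl"))}

    for path in sorted(libs):
        print(path)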

cu,
Rudi

Re: Intel MKL packaging

Cristian Rodríguez


On 07-05-2017 at 16:22, auxsvr wrote:
> Hi,
>
> There are several options to license Intel MKL (a free or community licence), which should make including it in OBS easier. If this is not allowed, what is the best way to package software that depends on it, e.g. numpy or pytorch?
>

IANAL, but I believe the Intel MKL license is not really suitable for
distribution, at least in the OSS repositories.

Re: Intel MKL packaging

Xing
The current numpy/scipy packages require openblas. But I think it might be better to compile numpy/scipy against the standard "unoptimized" lapack/blas so that users can switch to another optimized lapack/blas, such as openblas/MKL, at runtime using "update-alternatives"?
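
One way to see whether such a runtime switch actually takes effect would be a
crude matrix-multiplication timing, run once with the reference blas selected
and once with openblas/MKL selected via "update-alternatives"; just a sketch,
the absolute numbers depend entirely on the machine:

    # Crude dgemm benchmark: the np.dot call dispatches to whichever
    # BLAS implementation numpy loaded at run time.
    import time
    import numpy as np

    n = 2000
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    start = time.perf_counter()
    np.dot(a, b)
    print("%dx%d matmul took %.2f s" % (n, n, time.perf_counter() - start))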

Sincerely yours
Xing

On 05/07/2017 03:22 PM, auxsvr wrote:
> Hi,
>
> There are several options to license Intel MKL (a free or community licence), which should make including it in OBS easier. If this is not allowed, what is the best way to package software that depends on it, e.g. numpy or pytorch?


Re: Intel MKL packaging

todd rme
On Mon, May 8, 2017 at 11:01 AM, Xing <[hidden email]> wrote:

> On 05/07/2017 03:22 PM, auxsvr wrote:
>>
>> Hi,
>>
>> There are several options to license Intel MKL (a free or community licence),
>> which should make including it in OBS easier. If this is not allowed, what
>> is the best way to package software that depends on it, e.g. numpy or pytorch?
> The current numpy/scipy packages require openblas. But I think it might be
> better to compile numpy/scipy against the standard "unoptimized" lapack/blas
> so that users can switch to another optimized lapack/blas, such as
> openblas/MKL, at runtime using "update-alternatives"?
>
> Sincerely yours
> Xing

We can't support MKL using update-alternatives because it isn't
open-source. The rules of OBS forbid us from distributing it. So there
are really only two choices, an unoptimized version using lapack and
an optimized version using openblas.  I don't see an advantage to
shipping the unoptimized version.

Re: Intel MKL packaging

Xing
It might be simpler for users to use MKL if numpy is built against lapack/blas. A user would just need to install MKL and add MKL to the libblas.so.3 and liblapack.so.3 update-alternatives groups.

Also, openblas itself has 3 different flavors, and it seems that numpy is linked against libopenblas_pthreads.so.0, which cannot be changed using "update-alternatives"
because only "libopenblas.so.0" is configured by "update-alternatives" by default.

If we want to ship an optimized numpy, is it the same if we build numpy against lapack/blas and put "Requires: openblas" in the .spec file? Installing openblas itself
will update the links for libblas.so.3 and liblapack.so.3, right?
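
One way to check what the packaged numpy actually links against (the generic
libblas.so.3/liblapack.so.3 sonames versus a specific libopenblas_*.so.0) is
to run ldd over its compiled extension modules; a rough sketch, assuming ldd
is available:

    # Report any BLAS/LAPACK related dependencies of numpy's extension
    # modules, to see whether the package links against the generic
    # sonames or a specific openblas flavour.
    import glob
    import os
    import subprocess
    import numpy

    pkg_dir = os.path.dirname(numpy.__file__)
    for so in glob.glob(os.path.join(pkg_dir, "**", "*.so"), recursive=True):
        out = subprocess.check_output(["ldd", so]).decode()
        deps = [l.strip() for l in out.splitlines()
                if "blas" in l or "lapack" in l]
        if deps:
            print(so)
            for dep in deps:
                print("   ", dep)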

Sincerely yours
Xing

On 05/08/2017 11:15 AM, Todd Rme wrote:

> We can't support MKL using update-alternatives because it isn't
> open-source. The rules of OBS forbid us from distributing it. So there
> are really only two choices, an unoptimized version using lapack and
> an optimized version using openblas.  I don't see an advantage to
> shipping the unoptimized version.


Re: Intel MKL packaging

todd rme
On Mon, May 8, 2017 at 11:36 AM, Xing <[hidden email]> wrote:

> It might be simpler for users to use MKL if numpy is built against
> lapack/blas. A user would just need to install MKL and add MKL to the
> libblas.so.3 and liblapack.so.3 update-alternatives groups.
>
> Also, openblas itself has 3 different flavors, and it seems that numpy is
> linked against libopenblas_pthreads.so.0, which cannot be changed using
> "update-alternatives" because only "libopenblas.so.0" is configured by
> "update-alternatives" by default.
>
> If we want to ship an optimized numpy, is it the same if we build numpy
> against lapack/blas and put "Requires: openblas" in the .spec file?
> Installing openblas itself will update the links for libblas.so.3 and
> liblapack.so.3, right?

That depends on whether there are compile-time optimizations. numpy has
a compile-time switch for the library you want to build against, rather
than just using whatever is available and changing at runtime, and
specific changes had to be made to numpy to support openblas. So I would
think that runtime switching is different from choosing a target at
compile time.
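
The compile-time side can be poked at through numpy.distutils, which is the
machinery the build itself uses to pick a BLAS; a small sketch (numpy.distutils
existed in the numpy releases of that era, though it has been removed from
recent numpy versions):

    # Show which optimized BLAS/LAPACK numpy's own build machinery would
    # detect on this system (resolved at build time, not at run time).
    from numpy.distutils.system_info import get_info

    print("blas_opt:  ", get_info("blas_opt"))
    print("lapack_opt:", get_info("lapack_opt"))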

Re: Intel MKL packaging

Xing
Thanks! Now I recall that I had to hack some files in numpy in order to switch the LAPACK/BLAS implementation at runtime when numpy was built against the unoptimized LAPACK/BLAS. But that patch
no longer works (and is no longer needed) after numpy 1.10 or so.

It seems that if someone needs to use MKL, they do need to install MKL first and then compile numpy from source.

Sincerely yours
Xing

On 05/08/2017 11:57 AM, Todd Rme wrote:

> That depends on whether there are compile-time optimizations. numpy has
> a compile-time switch for the library you want to build against, rather
> than just using whatever is available and changing at runtime, and
> specific changes had to be made to numpy to support openblas. So I would
> think that runtime switching is different from choosing a target at
> compile time.

