[MRG] Joblib 0.9.4 #6179


Merged · 2 commits merged into scikit-learn:master on Jan 20, 2016
Conversation

@ogrisel (Member) commented Jan 18, 2016

Here is a code sync for joblib 0.9.4. In particular, it fixes a bug that can silently produce wrong results, as reported in #6063.

Therefore I would like to backport this to the maintenance branch and release 0.17.1 ASAP.

@ogrisel modified the milestones: 0.1.7.1, 0.17.1 on Jan 18, 2016
```diff
@@ -262,7 +259,7 @@ class Parallel(Logger):
     pre_dispatch: {'all', integer, or expression, as in '3*n_jobs'}
         The number of batches (of tasks) to be pre-dispatched.
         Default is '2*n_jobs'. When batch_size="auto" this is reasonable
-        default and the multiprocessing workers should never starve.
+        default and the multiprocessing workers shoud never starve.
```
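For readers following along, here is a minimal sketch (not part of this PR) of how the two parameters described in this docstring fit together; the toy task, `n_jobs` value, and expression are illustrative assumptions only:

```python
# Minimal sketch: with batch_size='auto' joblib groups tasks into batches, and
# pre_dispatch='2*n_jobs' queues two batches per worker ahead of time so that
# the multiprocessing workers should never starve while waiting for work.
from math import sqrt
from sklearn.externals.joblib import Parallel, delayed  # joblib vendored in scikit-learn 0.17

results = Parallel(n_jobs=2, batch_size='auto', pre_dispatch='2*n_jobs')(
    delayed(sqrt)(i) for i in range(100))
```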
@aabadie (Contributor) commented on the added docstring line:

Typo here

@ogrisel (Member, Author) commented Jan 19, 2016

Thanks @aabadie but those typos should be fixed upstream, not in the scikit-learn repo.

@ogrisel (Member, Author) commented Jan 19, 2016

It's weird that we did not catch them while reviewing the joblib PRs...

@ogrisel (Member, Author) commented Jan 19, 2016

Fixed in joblib/joblib@72e1625, will be included in the next joblib release.

@aabadie (Contributor) commented Jan 19, 2016

Thanks @ogrisel.

> It's weird that we did not catch them while reviewing the joblib PRs...

Indeed, I was surprised as well.

@amueller (Member) commented
@ogrisel so bugfix release? There was also the gradient boosting issue...

@ogrisel (Member, Author) commented Jan 20, 2016

@amueller I am indeed in favor of a bugfix release, as the joblib bug can cause silent errors (wrong CV results, although I did not manage to reproduce the issue with cross_val_score on my box).
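For reference, a minimal sketch (a hedged reconstruction, not ogrisel's actual script) of the kind of check meant here: comparing sequential and parallel cross_val_score results, which should match if joblib behaves correctly. The estimator and dataset are arbitrary choices; on scikit-learn 0.18+ the import would come from sklearn.model_selection instead.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import cross_val_score  # scikit-learn 0.17 location

iris = load_iris()
clf = LogisticRegression()

scores_seq = cross_val_score(clf, iris.data, iris.target, cv=5, n_jobs=1)  # sequential baseline
scores_par = cross_val_score(clf, iris.data, iris.target, cv=5, n_jobs=2)  # goes through joblib
np.testing.assert_array_almost_equal(scores_seq, scores_par)  # a silent-corruption bug would break this
```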

> There was also the gradient boosting issue...

Do you have the issue number handy?

Do you have other important bug fixes in mind?

Let me merge this PR and backport it to 0.17.X.

ogrisel added a commit that referenced this pull request Jan 20, 2016
@ogrisel merged commit dfa516c into scikit-learn:master on Jan 20, 2016
@amueller (Member) commented
I'm a bit out of the loop at the moment because I'm trying to focus on writing the book. @jmschrei can you point us to the gbrt speed bugfix?

@amueller (Member) commented
Other fixes: #6157, #5852, #5721 (this might already be in the branch?), #5773 (might also be in the branch). Probably also #5012 and #4995.

@GaelVaroquaux (Member) commented Jan 20, 2016 via email

@jmschrei (Member) commented
The fix was #5858.

@amueller (Member) commented
#6147: is that a joblib issue?

@ogrisel (Member, Author) commented Jan 26, 2016

> #6147: is that a joblib issue?

No, it's a weird behavior of numpy.memmap that is highlighted by the fact that we lowered the threshold for using memmap in 0.17 vs 0.16 (1MB instead of 100MB).
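A minimal sketch of the behavior referred to here, under the assumption that the relevant knob is the vendored joblib's auto-memmapping threshold (max_nbytes); the array size and worker function are illustrative only:

```python
import numpy as np
from sklearn.externals.joblib import Parallel, delayed  # vendored joblib

def input_type(a):
    # With the multiprocessing backend, a sufficiently large input array is
    # dumped to disk and handed to the worker as a numpy.memmap view.
    return type(a).__name__

big = np.zeros((1000, 1000))  # ~8 MB, above the 1 MB default threshold in 0.17

# Default threshold (1 MB in 0.17): the workers should report 'memmap'.
print(Parallel(n_jobs=2)(delayed(input_type)(big) for _ in range(2)))

# Raising max_nbytes back to the old 100 MB threshold keeps the array a plain ndarray.
print(Parallel(n_jobs=2, max_nbytes='100M')(delayed(input_type)(big) for _ in range(2)))
```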
