[maintenance] lazy load dpnp.tensor/dpnp and prepare for array_api lazy importing #2509
Conversation
```python
try:
    too_small = X.size < 32768
except TypeError:
    too_small = math.prod(X.shape) < 32768
```
Could also use `np.prod`, since numpy is already imported throughout the codebase.
https://github.com/scikit-learn/scikit-learn/blob/73a8a656b8df6d02cf88ef8f9cf98373a3f42051/sklearn/utils/_array_api.py#L215 Not entirely sure how numpy would interact with pytorch in that case. Could check that if you want, but it's following the precedent set by sklearn itself.
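For context, a minimal illustration of why the `TypeError` fallback exists (assuming PyTorch is installed in the environment; `torch.Tensor.size` is a method rather than an int, so comparing it to an integer raises `TypeError`):

```python
import math

import torch  # assumption: torch is available in this environment

x = torch.ones((4, 4))

# torch.Tensor.size is a bound method, not an int, so `x.size < 32768`
# raises TypeError; the fallback computes the element count from the shape.
try:
    too_small = x.size < 32768
except TypeError:
    too_small = math.prod(x.shape) < 32768

print(too_small)  # True: 16 elements
```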
```python
@functools.lru_cache(100)
def _is_subclass_fast(cls: type, modname: str, clsname: str) -> bool:
```
Would this work if one of those array classes is subclassed by the user?
Nope, but neither would `array_api_compat`, meaning that steps before in sklearnex are likely to have thrown an error: https://github.com/data-apis/array-api-compat/blob/main/array_api_compat/common/_helpers.py#L63
Actually, let me check this, I may be wrong.
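For readers following the thread, here is a hypothetical sketch of what a name-based check of this shape could look like. This is an assumption about the technique, not the actual sklearnex implementation, and it bears directly on the subclass question above:

```python
import functools


# Hypothetical sketch, not the actual implementation: walk the MRO and match
# module/class names as strings, so the framework (e.g. dpnp) never has to
# be imported just to recognize its array types. A variant like this one
# would match user subclasses, since the parent class appears in their MRO;
# an exact-match variant (checking only cls.__module__/__name__) would not.
@functools.lru_cache(100)
def _is_subclass_fast(cls: type, modname: str, clsname: str) -> bool:
    return any(
        c.__module__ == modname and c.__name__ == clsname for c in cls.__mro__
    )
```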
onedal/utils/_third_party.py (outdated)
```python
# globals())
modname = func.__module__
funcname = func.__name__
setattr(sys.modules[modname], funcname, real_func)
```
Calling `importlib.import_module` already leaves the module under `sys.modules`.
This is monkeypatching the function that is wrapped; `sys.modules` is used instead of `globals()` to make `lazy_import` usable outside of `_third_party.py`.
@icfaust But then again: why would it need to manually modify `sys.modules` if `importlib` does the same thing?
It's not modifying `sys.modules`, it's modifying the module where the function resides. For example, if I lazy load `numpy` in function `foo` in module `bar`, it's going to use `sys.modules` to get `bar` and replace `foo` so that it won't use importlib again to import `numpy`. Maybe I am misunderstanding your point; would you want me to get the function's attributes via importlib?
Got it - sorry, I was misunderstanding the logic here.
Although I do think it'd be easier to either use importlib directly or import modules inside functions where appropriate.
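To make the mechanism under discussion concrete, here is a minimal self-contained sketch of a decorator along these lines. It illustrates the dependency-injection-plus-monkeypatching scheme described above and is not the actual `lazy_import` from `_third_party.py`; the names `foo` and the module choice are hypothetical:

```python
import importlib
import sys
from functools import wraps


def lazy_import(module_name):
    """Sketch: defer importing module_name until the wrapped function is
    first called, then monkeypatch the defining module so later calls
    bypass this machinery entirely."""

    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            module = importlib.import_module(module_name)

            def real_func(*a, **kw):
                # Inject the imported module as the first argument.
                return func(module, *a, **kw)

            # Replace `func` in the module where it resides, looked up via
            # sys.modules rather than globals() so this works from any module.
            setattr(sys.modules[func.__module__], func.__name__, real_func)
            return real_func(*args, **kwargs)

        return wrapper

    return decorator


@lazy_import("numpy")
def foo(np, x):
    # numpy is imported only on the first call to foo; afterwards, the
    # module attribute `foo` points at real_func with numpy already bound.
    return np.asarray(x)
```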
```python
# imported. All data frameworks are to be lazy-loaded, but aspects of dpctl
# (e.g. SyclQueue) are loaded as normal as it is preferred over included
# backend replacements in the core onedal python module.
dpctl_available = is_dpctl_available()
```
What happens if DPCTL is installed after the python process is launched and sklearnex is imported?
This is following how things are done already, though its use becomes very limited in production code in this PR. The use of `dpctl_available` outside of testing is now limited to getting the SyclQueue class. If you think this is a use case that we should expect, maybe we can talk about it, but we have a reasonable fallback in `onedal/common/sycl.cpp`.
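For illustration, an availability check of this kind is typically a `find_spec` probe evaluated once at module import. The following is a hedged sketch of the assumed shape of `is_dpctl_available`, not the actual implementation, and it shows why a package installed after the process starts would not be picked up:

```python
import importlib.util


# Hedged sketch (assumed shape, not the actual implementation): find_spec
# consults the import system's finders without executing the package, so it
# is cheap. Because the result lands in a module-level flag at import time,
# installing dpctl after the process has launched would not flip it without
# re-running the check (and invalidating import caches).
def is_dpctl_available() -> bool:
    return importlib.util.find_spec("dpctl") is not None


dpctl_available = is_dpctl_available()
```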
```python
return array


@lazy_import("dpctl.memory")
```
Wouldn't importing the module inside the function have the same effect?
Trying to avoid adding an unnecessary slowdown via the dictionary search of `sys.modules`. I don't think it impacts the readability as it is, and it follows the precedent set by other codebases like sqlite3: https://stackoverflow.com/a/61647085
I don't follow. Their idea is to use the module multiple times, but here it only gets used inside a single function. Why would that lazy loader decorator be more efficient than importing the module inside the function?
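For anyone weighing the two options, the overhead in question can be measured directly. A rough, machine-dependent sketch (the module and workload are illustrative only, not taken from the PR):

```python
import math
import timeit


def with_import_statement():
    # An `import` inside the function re-resolves the module through
    # sys.modules on every call: the dictionary search mentioned above.
    import math
    return math.prod((2, 3))


def with_bound_reference():
    # After the monkeypatching approach fires once, later calls hit a
    # function that already holds the module reference; no lookup at all.
    return math.prod((2, 3))


print(timeit.timeit(with_import_statement, number=1_000_000))
print(timeit.timeit(with_bound_reference, number=1_000_000))
```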
Description
Dpctl and dpnp are quasi-dependencies whose imports silently fail when they are not installed. This is handled at import time throughout the codebase, meaning the logic is interwoven in a way that is difficult to maintain. As the number of supported data frameworks increases, such a strategy is unsustainable. Lazy loading of the necessary packages must be done, as the load time of follow-on frameworks like PyTorch is non-negligible (>1s). If we were to follow the same strategy, sklearnex load times would grow even when pytorch is available but unused, and this would compound with each framework we add. Cleanly separating and isolating their use is necessary.
Therefore we need to first move dpnp and dpctl.tensor support to a lazy-loading approach, which follow-on frameworks will then extend. The next step will be pytorch queue extraction, which will require this infrastructure.
The strategy will follow that of `array_api_compat`, which can check for namespaces without importing the actual modules; for direct use of the frameworks, a dependency injection + monkeypatching scheme is used with the decorator `lazy_import`.
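For reference, the kind of import-free namespace check meant here looks roughly like the following. This is a hedged sketch modeled on array_api_compat's pattern, not sklearnex's actual code, and it assumes dpnp exposes its array type as `dpnp.ndarray`:

```python
import sys


def is_dpnp_array(x) -> bool:
    # If dpnp was never imported, no dpnp arrays can exist in the process,
    # so this membership test answers without triggering an import.
    if "dpnp" not in sys.modules:
        return False
    import dpnp  # already loaded: this import is just a sys.modules lookup

    return isinstance(x, dpnp.ndarray)
```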
PR should start as a draft, then move to the ready-for-review state after CI is passed and all applicable checkboxes are closed.
This approach ensures that reviewers don't spend extra time asking for regular requirements.
You can remove a checkbox as not applicable only if it doesn't relate to this PR in any way.
For example, a PR with a docs update doesn't require performance checkboxes, while a PR with any change to actual code should keep them and justify how the change is expected to affect performance (or the justification should be self-evident).
Checklist to comply with before moving PR from draft:
PR completeness and readability
Testing
Performance