
Per-Layer Profiling #864

Open

avik-pal opened this issue Aug 30, 2024 · Discussed in #863 · 0 comments
Labels
help wanted Extra attention is needed

Comments

@avik-pal
Member

Discussed in https://github.com/orgs/LuxDL/discussions/863

Originally posted by jakubMitura14 August 30, 2024
Hello, is it possible to profile a model per layer, to monitor how long each layer takes to execute and, preferably, the peak GPU memory consumption per layer?

Originally posted by avik-pal August 30, 2024

I have wanted to build something like this for quite some time, and it shouldn't be very hard. I like how easy it is to generate flamegraphs in Julia (especially in VS Code), but I agree that they don't present the data at the higher-level granularity that is helpful to end users.

The general sketch would be similar to how DebugLayer is implemented. Essentially, we do @profile_mode model; then, for each "leaf" layer, we construct a ProfiledLayer(...) that stores a shared timer_output (from TimerOutputs.jl). This gives a tree view of where all the time went.
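To make the idea concrete, here is a minimal, self-contained sketch of the wrapper-layer approach. Note that `ProfiledLayer`, the `timings` accumulator, and the layer names are all hypothetical illustrations, not part of Lux's API; a real implementation would record into a shared `TimerOutput` from TimerOutputs.jl (via `@timeit`) instead of the plain `Dict` used here, and would thread Lux's `(ps, st)` parameter/state tuple through the call.

```julia
# Hypothetical sketch of a ProfiledLayer-style wrapper (not the Lux API).
# Each wrapper shares one `timings` store, so wrapping every leaf layer
# accumulates per-layer wall time across the whole model.
struct ProfiledLayer{L}
    layer::L                        # the wrapped "leaf" layer (any callable here)
    name::String                    # label used when recording timings
    timings::Dict{String,Float64}   # shared accumulator across all wrappers
end

function (p::ProfiledLayer)(x)
    t0 = time_ns()
    y = p.layer(x)                        # run the wrapped layer
    elapsed = (time_ns() - t0) / 1e9      # elapsed wall time in seconds
    p.timings[p.name] = get(p.timings, p.name, 0.0) + elapsed
    return y
end

# Usage: wrap two toy "layers" with a shared timings store.
timings = Dict{String,Float64}()
dense = ProfiledLayer(x -> 2 .* x, "dense", timings)
relu  = ProfiledLayer(x -> max.(x, 0), "relu", timings)

y = relu(dense([-1.0, 2.0]))   # y == [0.0, 4.0]
# `timings` now holds accumulated seconds under "dense" and "relu"
```

Using a shared store is what enables the tree view: with TimerOutputs.jl, nested `@timeit` sections recorded into one `TimerOutput` automatically produce a hierarchical timing report.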

This could be further augmented with other profiling data, such as CPU/GPU memory usage. A good source of inspiration would be how the PyTorch profiler works.

avik-pal added the help wanted label Aug 30, 2024