Use SnoopPrecompile? #506

Open
mkitti opened this issue Jan 5, 2023 · 2 comments

Comments

@mkitti
Contributor

mkitti commented Jan 5, 2023

We may be able to accelerate GR.jl further by using SnoopPrecompile.

using SnoopPrecompile

# Place the workload at module top level rather than inside __init__,
# so it runs during precompilation and the compiled code is cached:
@precompile_setup begin
    x = [1, 3, 2, 4, 5, 7, 6, 9, 8]
    y = Float64[1, 3, 2, 4, 5, 7, 6, 9, 8]
    @precompile_all_calls begin
        plot(x)
        plot(y)
        plot(x, y)
    end
end
julia> using SnoopPrecompile

julia> SnoopPrecompile.verbose[] = true
true


julia> using GR
[ Info: Precompiling GR [28b8d3ca-fb5f-59d9-8090-bfdbd6d07a71]
Warning: Ignoring XDG_SESSION_TYPE=wayland on Gnome. Use QT_QPA_PLATFORM=wayland to run on Wayland anyway.
MethodInstance for GR.jlgr.isrowvec(::Vector{Int64})
MethodInstance for GR.jlgr.isvector(::StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64})
MethodInstance for vec(::StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64})
MethodInstance for GR.jlgr.isvector(::Vector{Int64})
MethodInstance for GR.jlgr.set_viewport(::Symbol, ::Vector{Int64}, ::GR.jlgr.PlotObject)
MethodInstance for /(::Float64, ::Int32)
MethodInstance for GR.setwsviewport(::Int64, ::Float64, ::Int64, ::Float64)
MethodInstance for GR.setwswindow(::Int64, ::Int64, ::Int64, ::Float64)
MethodInstance for GR.jlgr.set_window(::Symbol, ::GR.jlgr.PlotObject)
MethodInstance for GR.jlgr.Extrema64(::StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64})
MethodInstance for GR.jlgr.Extrema64(::Vector{Int64})
MethodInstance for GR.jlgr.fix_minmax(::Float64, ::Float64)
MethodInstance for GR.jlgr.fix_minmax(::Int64, ::Int64)
MethodInstance for GR.adjustlimits(::Float64, ::Float64)
MethodInstance for GR.jlgr.auto_tick(::Float64, ::Float64)
MethodInstance for GR.setwindow(::Float64, ::Float64, ::Float64, ::Float64)
MethodInstance for GR.jlgr.draw_axes(::Symbol)
MethodInstance for Base.indexed_iterate(::Tuple{Float64, Tuple{Float64, Float64}, Int64}, ::Int64)
MethodInstance for Base.literal_pow(::typeof(^), ::Float64, ::Val{2})
MethodInstance for GR.setcharheight(::Float64)
MethodInstance for GR.grid(::Float64, ::Float64, ::Int64, ::Int64, ::Int64, ::Int64)
MethodInstance for GR.axes(::Float64, ::Float64, ::Float64, ::Float64, ::Int64, ::Int64, ::Float64)
MethodInstance for GR.polyline(::StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, ::Vector{Int64})
MethodInstance for GR.jlgr.isrowvec(::Vector{Float64})
MethodInstance for GR.jlgr.isvector(::Vector{Float64})
MethodInstance for GR.jlgr.Extrema64(::Vector{Float64})
MethodInstance for GR.polyline(::StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}, ::Vector{Float64})
MethodInstance for GR.jlgr.var"#plot#34"(::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}}, ::typeof(GR.jlgr.plot), ::Vector{Int64}, ::Vararg{Union{AbstractString, Function, AbstractVector, AbstractMatrix}})
MethodInstance for GR.polyline(::Vector{Int64}, ::Vector{Float64})

@t-bltg @sjkelly I'm a bit confused about the state of precompilation between this package and Plots.jl. I was under the impression we had already done something like the above, but I see precompile.jl is quite limited.

@sjkelly
Contributor

sjkelly commented Jan 5, 2023

The precompile sparsity is deliberate. In the pre-1.8 world I had done some benchmarks with such statements, and they did not help overall load+compile time. I presume that will no longer be the case on 1.8+, but it may be helpful to version-block them.
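A version-blocked workload along those lines might look like the following sketch (the `v"1.9"` cutoff and the use of `@static` here are assumptions for illustration, not a tested configuration):

```julia
using SnoopPrecompile

# Only pay the extra precompilation cost where Julia caches native
# code (1.9+); on older versions the workload is skipped entirely.
# The version cutoff is an assumption, not a benchmarked choice.
@static if VERSION >= v"1.9"
    @precompile_setup begin
        x = [1, 3, 2, 4, 5, 7, 6, 9, 8]
        @precompile_all_calls begin
            plot(x)
        end
    end
end
```

Because `@static` resolves at parse time, pre-1.9 Julia never even expands the workload, so older versions see no load-time regression.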

@mkitti
Contributor Author

mkitti commented Jan 5, 2023

Thanks for clarifying. As @jheinen pointed out, we are seeing significant acceleration with Julia 1.9 due to native code caching. I suspect we can do even better with more precompilation. Perhaps we should restrict it to Julia 1.9+.
