Speed up wave.resource module #352

Open
wants to merge 38 commits into base: develop
Changes from all commits

Commits (38)
fcc910e  fix assignment in type_handling (akeeste, Sep 12, 2024)
b19217c  temporary testing file (akeeste, Sep 16, 2024)
5addc17  initial conversion of energy_period and frequency_moment to DataArray (akeeste, Sep 16, 2024)
ac91b92  energy_period working with variety of types and converting to dataArr… (akeeste, Sep 16, 2024)
5d914a3  extend xr.dataarray basis to all wave.resource functions (akeeste, Sep 24, 2024)
8683b88  remove testing script (akeeste, Sep 24, 2024)
e860034  black formatting (akeeste, Sep 24, 2024)
3382825  fix most test formatting (akeeste, Sep 24, 2024)
c5241af  use dataarrays instead of datasets in wave.performance (akeeste, Sep 30, 2024)
a2d5f61  revert surface_elevation function back to datasets (akeeste, Sep 30, 2024)
e016fff  Revert "revert surface_elevation function back to datasets" (akeeste, Oct 1, 2024)
a719620  allow datasets, 2d dataframes. Update test formatting appropriately (akeeste, Oct 1, 2024)
d585cb5  simplify and improve robustness of convert_to_dataarray for 1-var dat… (akeeste, Oct 2, 2024)
ac5b436  update test formatting (akeeste, Oct 2, 2024)
afc7f8c  clean up frequency_bin and method checks in elevation_surface (akeeste, Oct 2, 2024)
8f1647f  update and annotate type_handling (akeeste, Oct 2, 2024)
c0d72d0  black formatting (akeeste, Oct 2, 2024)
8dddf42  minor type fix (akeeste, Oct 2, 2024)
3a170ff  update type references in loads (akeeste, Oct 2, 2024)
42a85d8  update type references in loads - v2 (akeeste, Oct 2, 2024)
28b847b  black formatting (akeeste, Oct 10, 2024)
2f04e87  fix call to fillna() in MLER example (akeeste, Oct 10, 2024)
d3fcc02  fix references to k in MLER example (akeeste, Oct 10, 2024)
07b5033  add variable names to Hm0, Te, and Tp for DataFrame creation (akeeste, Oct 10, 2024)
4811202  update pd.Series naming in examples (akeeste, Oct 10, 2024)
77c1980  fix typo in pacwave notebook (akeeste, Oct 11, 2024)
6dbbc18  add variable names after data conversion (akeeste, Oct 14, 2024)
939d730  Merge branch 'develop' of https://github.com/MHKiT-Software/MHKiT-Pyt… (akeeste, Oct 14, 2024)
664d2e9  update pacwave example (akeeste, Oct 14, 2024)
dad0d0a  add type check to all wave.resource variable naming (akeeste, Oct 14, 2024)
853521d  update wave example with new data types (akeeste, Oct 14, 2024)
28e67f6  pull buoy name from metadata in cdip example (akeeste, Oct 14, 2024)
69e694b  tighten up example timing (akeeste, Oct 14, 2024)
243a999  address minor review comments, add some type checking tests (akeeste, Oct 17, 2024)
a7241f3  complicated dataset and dataframe handling in surface_elevation and e… (akeeste, Oct 17, 2024)
d13bb22  restore and simplify dataset input to elevation_spectrum, surface_ele… (akeeste, Oct 17, 2024)
0b30366  black formatting (akeeste, Oct 17, 2024)
0c5f5be  update missed docstring (akeeste, Oct 17, 2024)
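
Several of these commits (5addc17, 5d914a3, d585cb5, 8f1647f) center on a `type_handling` helper that normalizes pandas and xarray inputs to a single `xr.DataArray` before any computation. The snippet below is a minimal sketch of that idea, not MHKiT's actual `convert_to_dataarray`; the function name and the behavior chosen for each input type are assumptions for illustration only.

```python
# Illustrative sketch only -- not the actual MHKiT type_handling code.
# Accept Series, DataFrame, DataArray, single-variable Dataset, or ndarray
# and return an xr.DataArray so downstream math runs on one type.
import numpy as np
import pandas as pd
import xarray as xr


def convert_to_dataarray_sketch(data, name="data"):
    if isinstance(data, xr.DataArray):
        return data
    if isinstance(data, xr.Dataset):
        # Only unambiguous (single-variable) Datasets can be converted.
        if len(data.data_vars) != 1:
            raise ValueError("Expected a single-variable Dataset")
        return data[list(data.data_vars)[0]]
    if isinstance(data, (pd.Series, pd.DataFrame)):
        da = xr.DataArray(data)  # index (and columns) become dimensions
        da.name = name
        return da
    if isinstance(data, np.ndarray):
        return xr.DataArray(data, name=name)
    raise TypeError(f"Unsupported input type: {type(data)}")
```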
10 changes: 5 additions & 5 deletions .github/workflows/generate_notebook_matrix.py
@@ -14,14 +14,14 @@
"cdip_example.ipynb": 420,
"Delft3D_example.ipynb": 180,
"directional_waves.ipynb": 180,
"environmental_contours_example.ipynb": 360,
"extreme_response_contour_example.ipynb": 360,
"extreme_response_full_sea_state_example.ipynb": 360,
"extreme_response_MLER_example.ipynb": 360,
"environmental_contours_example.ipynb": 240,
"extreme_response_contour_example.ipynb": 240,
"extreme_response_full_sea_state_example.ipynb": 240,
"extreme_response_MLER_example.ipynb": 240,
"loads_example.ipynb": 180,
"metocean_example.ipynb": 180,
"mooring_example.ipynb": 240,
"PacWave_resource_characterization_example.ipynb": 780,
"PacWave_resource_characterization_example.ipynb": 240,
"power_example.ipynb": 180,
"qc_example.ipynb": 180,
"river_example.ipynb": 180,
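For context, `generate_notebook_matrix.py` maps each example notebook to a CI timeout in seconds, and the hunk above shortens several of those timeouts. Below is a hypothetical sketch of how such a mapping can be turned into a GitHub Actions job matrix; the variable names and output key are assumptions, not necessarily what the real script does.

```python
# Hypothetical sketch: emit a notebook/timeout matrix for GitHub Actions.
import json
import os

notebook_timeouts = {
    "environmental_contours_example.ipynb": 240,
    "PacWave_resource_characterization_example.ipynb": 240,
}

matrix = {
    "include": [
        {"notebook": nb, "timeout": seconds}
        for nb, seconds in notebook_timeouts.items()
    ]
}

# GitHub Actions reads step outputs from the file named in GITHUB_OUTPUT.
with open(os.environ["GITHUB_OUTPUT"], "a") as fh:
    fh.write(f"matrix={json.dumps(matrix)}\n")
```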
13 changes: 10 additions & 3 deletions examples/PacWave_resource_characterization_example.ipynb
@@ -1119,12 +1119,19 @@
"\n",
" Tz_list.append(resource.average_zero_crossing_period(year_data.T))\n",
"\n",
"# Concatenate list of Series into a single DataFrame\n",
"# Concatenate each list of Series into a single Series\n",
"Te = pd.concat(Te_list, axis=0)\n",
"Tp = pd.concat(Tp_list, axis=0)\n",
"Hm0 = pd.concat(Hm0_list, axis=0)\n",
"J = pd.concat(J_list, axis=0)\n",
"Tz = pd.concat(Tz_list, axis=0)\n",
"\n",
"# Name each Series and concat into a dataFrame\n",
"Te.name = 'Te'\n",
"Tp.name = 'Tp'\n",
"Hm0.name = 'Hm0'\n",
"J.name = 'J'\n",
"Tz.name = 'Tz'\n",
"data = pd.concat([Hm0, Te, Tp, J, Tz], axis=1)\n",
"\n",
"# Calculate wave steepness\n",
@@ -1590,7 +1597,7 @@
" J = []\n",
" for i in range(len(result)):\n",
" b = resource.jonswap_spectrum(f, result.Tp[i], result.Hm0[i])\n",
" J.extend([resource.energy_flux(b, h=399.0).values[0][0]])\n",
" J.extend([resource.energy_flux(b, h=399.0).item()])\n",
"\n",
" result[\"J\"] = J\n",
" results[N] = result\n",
@@ -1622,7 +1629,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
"version": "3.12.7"
}
},
"nbformat": 4,
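The hunks above name each pd.Series before concatenation so the resulting DataFrame gets labeled columns, and swap `.values[0][0]` for `.item()` when pulling a scalar out of the energy-flux result. A standalone illustration of both patterns, with made-up numbers rather than the notebook's buoy data:

```python
import pandas as pd
import xarray as xr

# Per-year results come back as unnamed Series; concatenate them first.
Hm0_list = [pd.Series([1.2, 1.5], index=[0, 1]), pd.Series([1.1], index=[2])]
Te_list = [pd.Series([7.0, 7.4], index=[0, 1]), pd.Series([6.8], index=[2])]

Hm0 = pd.concat(Hm0_list, axis=0)
Te = pd.concat(Te_list, axis=0)

# Naming each Series gives the combined DataFrame meaningful column labels.
Hm0.name = "Hm0"
Te.name = "Te"
data = pd.concat([Hm0, Te], axis=1)
print(list(data.columns))  # ['Hm0', 'Te']

# .item() extracts the scalar from a single-element result, regardless of
# whether it is 1-D or 2-D, which is what the energy-flux line switches to.
J = xr.DataArray([[42.0]]).item()
print(J)  # 42.0
```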
6 changes: 3 additions & 3 deletions examples/cdip_example.ipynb
@@ -576,7 +576,7 @@
"Hs = buoy_data[\"data\"][\"wave\"][\"waveHs\"]\n",
"Tp = buoy_data[\"data\"][\"wave\"][\"waveTp\"]\n",
"Dp = buoy_data[\"data\"][\"wave\"][\"waveDp\"]\n",
"buoy_name = buoy_data[\"data\"][\"wave\"].name\n",
"buoy_name = buoy_data[\"metadata\"][\"name\"]\n",
"ax = graphics.plot_compendium(Hs, Tp, Dp, buoy_name)"
]
},
@@ -590,7 +590,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -604,7 +604,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
"version": "3.12.7"
}
},
"nbformat": 4,
18 changes: 13 additions & 5 deletions examples/environmental_contours_example.ipynb
@@ -647,9 +647,13 @@
" Hm0_list.append(resource.significant_wave_height(year_data.T))\n",
" Te_list.append(resource.energy_period(year_data.T))\n",
"\n",
"# Concatenate list of Series into a single DataFrame\n",
"# Concatenate each list of Series into a single Series\n",
"Te = pd.concat(Te_list, axis=0)\n",
"Hm0 = pd.concat(Hm0_list, axis=0)\n",
"\n",
"# Name each Series and concat into a dataFrame\n",
"Te.name = 'Te'\n",
"Hm0.name = 'Hm0'\n",
"Hm0_Te = pd.concat([Hm0, Te], axis=1)\n",
"\n",
"# Drop any NaNs created from the calculation of Hm0 or Te\n",
@@ -800,7 +804,7 @@
"source": [
"## Resource Clusters\n",
"\n",
"Often in resource characterization we want to pick a few representative sea state to run an alaysis. To do this with the resource data in python we reccomend using a Gaussian Mixture Model (a more generalized k-means clustering method). Using sckitlearn this is very straigth forward. We combine our Hm0 and Te data into an N x 2 numpy array. We specify our number of components (number of representative sea states) and then call the fit method on the data. Fianlly, using the methods `means_` and `weights` we can organize the results into an easily digestable table."
"Often in resource characterization we want to pick a few representative sea state to run an alaysis. To do this with the resource data in python we reccomend using a Gaussian Mixture Model (a more generalized k-means clustering method). Using sckitlearn this is very straight forward. We combine our Hm0 and Te data into an N x 2 numpy array. We specify our number of components (number of representative sea states) and then call the fit method on the data. Fianlly, using the methods `means_` and `weights` we can organize the results into an easily digestable table."
]
},
{
@@ -933,9 +937,13 @@
" Hm0_list.append(resource.significant_wave_height(year_data.T))\n",
" Tp_list.append(resource.peak_period(year_data.T))\n",
"\n",
"# Concatenate list of Series into a single DataFrame\n",
"# Concatenate each list of Series into a single Series\n",
"Tp = pd.concat(Tp_list, axis=0)\n",
"Hm0 = pd.concat(Hm0_list, axis=0)\n",
"\n",
"# Name each Series and concat into a dataFrame\n",
"Tp.name = 'Tp'\n",
"Hm0.name = 'Hm0'\n",
"Hm0_Tp = pd.concat([Hm0, Tp], axis=1)\n",
"\n",
"# Drop any NaNs created from the calculation of Hm0 or Te\n",
@@ -1116,7 +1124,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.9.13 ('.venv': venv)",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -1130,7 +1138,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
"version": "3.12.4"
},
"vscode": {
"interpreter": {
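The markdown cell quoted above describes clustering Hm0 and Te with a Gaussian Mixture Model; in scikit-learn the fitted quantities are the attributes `means_` and `weights_`. A self-contained sketch of that workflow with synthetic stand-in data (not the buoy data used in the notebook):

```python
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture

# Synthetic stand-ins for significant wave height [m] and energy period [s].
rng = np.random.default_rng(0)
Hm0 = rng.gamma(2.0, 1.0, 500)
Te = 6.0 + 0.8 * Hm0 + rng.normal(0.0, 0.5, 500)

# Combine into an N x 2 array and fit a GMM with the desired number of
# representative sea states.
X = np.column_stack((Hm0, Te))
gmm = GaussianMixture(n_components=8).fit(X)

# Organize the results into a digestible table: one row per component with
# its mean Hm0, mean Te, and weight.
clusters = pd.DataFrame(gmm.means_, columns=["Hm0", "Te"])
clusters["weight"] = gmm.weights_
print(clusters)
```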
15 changes: 7 additions & 8 deletions examples/extreme_response_MLER_example.ipynb
@@ -197,11 +197,11 @@
"\n",
"# generate wave number k\n",
"k = resource.wave_number(wave_freq, 70)\n",
"k = k.fillna(0)\n",
"np.nan_to_num(k, 0)\n",
"\n",
"peakHeightDesired = Hs / 2 * 1.9\n",
"mler_norm = extreme.mler_wave_amp_normalize(\n",
" peakHeightDesired, mler_data, sim, k.k.values\n",
" peakHeightDesired, mler_data, sim, k\n",
")"
]
},
@@ -239,7 +239,7 @@
}
],
"source": [
"mler_ts = extreme.mler_export_time_series(RAO.values, mler_norm, sim, k.k.values)\n",
"mler_ts = extreme.mler_export_time_series(RAO.values, mler_norm, sim, k)\n",
"mler_ts.plot(xlabel=\"Time (s)\", ylabel=\"[m] / [*]\", xlim=[-100, 100], grid=True)"
]
},
@@ -256,7 +256,7 @@
"hash": "6acc4428af86beefd6565514d05fe9ce8e024621fafadd3627cdac7b7bd68bc4"
},
"kernelspec": {
"display_name": "Python 3.8.10 64-bit ('MHKdev': conda)",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -270,10 +270,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
},
"orig_nbformat": 4
"version": "3.12.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}
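The MLER hunk above replaces `k.fillna(0)` with `np.nan_to_num(k, 0)` and passes `k` directly to `mler_wave_amp_normalize` and `mler_export_time_series` instead of `k.k.values`, consistent with `wave_number` no longer returning a Dataset. Below is a minimal sketch of the NaN handling only, assuming `k` ends up as a plain NumPy array; the frequency vector, the deep-water dispersion estimate, and the forced NaN are made-up values for illustration, not the notebook's data.

```python
import numpy as np

# Made-up frequency vector; the real example builds it from the spectrum.
wave_freq = np.linspace(0.0, 1.0, 50)

# Stand-in for resource.wave_number(wave_freq, 70): deep-water estimate
# k = (2*pi*f)^2 / g, with the zero-frequency bin forced to NaN to mimic
# an undefined entry from the solver.
g = 9.81
k = (2 * np.pi * wave_freq) ** 2 / g
k[0] = np.nan

# Replace NaNs with 0.0 so downstream MLER routines receive finite values;
# this mirrors the intent of the np.nan_to_num call in the hunk above.
k = np.nan_to_num(k, nan=0.0)
```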
8 changes: 6 additions & 2 deletions examples/extreme_response_contour_example.ipynb
@@ -72,9 +72,13 @@
" Hm0_list.append(resource.significant_wave_height(year_data.T))\n",
" Te_list.append(resource.energy_period(year_data.T))\n",
"\n",
"# Concatenate list of Series into a single DataFrame\n",
"# Concatenate each list of Series into a single Series\n",
"Te = pd.concat(Te_list, axis=0)\n",
"Hm0 = pd.concat(Hm0_list, axis=0)\n",
"\n",
"# Name each Series and concat into a dataFrame\n",
"Te.name = 'Te'\n",
"Hm0.name = 'Hm0'\n",
"Hm0_Te = pd.concat([Hm0, Te], axis=1)\n",
"\n",
"# Drop any NaNs created from the calculation of Hm0 or Te\n",
@@ -323,7 +327,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
"version": "3.12.4"
}
},
"nbformat": 4,
8 changes: 6 additions & 2 deletions examples/extreme_response_full_sea_state_example.ipynb
@@ -75,9 +75,13 @@
" Hm0_list.append(resource.significant_wave_height(year_data.T))\n",
" Te_list.append(resource.energy_period(year_data.T))\n",
"\n",
"# Concatenate list of Series into a single DataFrame\n",
"# Concatenate each list of Series into a single Series\n",
"Te = pd.concat(Te_list, axis=0)\n",
"Hm0 = pd.concat(Hm0_list, axis=0)\n",
"\n",
"# Name each Series and concat into a dataFrame\n",
"Te.name = 'Te'\n",
"Hm0.name = 'Hm0'\n",
"Hm0_Te = pd.concat([Hm0, Te], axis=1)\n",
"\n",
"# Drop any NaNs created from the calculation of Hm0 or Te\n",
@@ -573,7 +577,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
"version": "3.12.4"
}
},
"nbformat": 4,