Day 4 - HFIP Annual Session 7
Session 7 Summary & Key Ideas:
Developer Support Discussion:
DTC has provided governance and support for the transition of community model add-ons and upgrades into HAFS
(previously HWRF). Numerous HFIP and HAFS support activities are in the process of transitioning from DTC to
EPIC. EPIC will also support training, tutorials, workshops, and meetings.
NHC (Wallace Hogsett) update on the NHC wind-speed probability model. The new version will use the full 2D vortex
wind field, as opposed to a climatological wind field, and will also incorporate a land surface model for wind
reduction over land.
Mike Ek on physics across scales - need to develop physics in a hierarchical development
system, from single-column and basic 2D simulations, to coarse simulations, to high resolution, to eventual
very-high-resolution fully coupled simulations.
RDHPC presentations: Radhika: Plans are in place to bring aboard a new system at Fairmont to replace Jet. There
have been recent upgrades to Gaea, and there is a planned upgrade to Hera in spring 2024. Additional funding for
cloud computing will also be an integral part of the path moving forward. The Inflation Reduction Act (IRA)
will support ~$190M in HPC infrastructure upgrades from FY23-25. There is an effort underway to unify HPC platforms
to expedite transition of code across platforms.
Avichal: we need to reduce the effort/FTE burden in running code across multiple platforms.
Indiviglio: Upcoming change in the HPC landscape with machine learning and AI. GPUs present new opportunities,
but require some effort to adapt code and for users to optimize their use. Increasing complexity in porting across
multiple machine platforms, including cloud computing. Shrinking compiler choices, growing software choices.
Future of HFIP and Strategic Planning:
Need to establish new 2025 strategic plan with updated 5 and 10 year goals. First strategic plan was in 2009,
last updated in 2019. Need clear goals, as was done in the past.
Transition from GSI to JEDI will be a major step that we need to take.
Improved probabilistic guidance, quantification of uncertainty. Can we better leverage reanalysis, or machine
learning for this?
Enhance communication, particularly with respect to risk uncertainty
Need dedicated HPC support, and strategy for continued adoption of cloud.
R2O enhancements and collaboration with HOT, DTC, EPIC, and UFS R2O. Broaden expertise and expand community,
particularly through EPIC and UFS.
Need to increase diversity in the models, particularly between HAFS-A and -B. Physics diversity helps, but also need
diversity in the initial conditions.
Need more focus on storm structure, particularly in terms of size of the wind field. Not only does a larger wind
field impact a larger region, but it also is the main driver of storm surge.
Why are there differences in model forecasts in large bust cases? For example, why was HAFS-B consistently better
than HAFS-A for Philippe (2023)?
Jason Sippel - what can the global system do to make our lives easier on the TC side? At what point is GFS
cycling good enough that HAFS does not have to run its own cycling? 6-km resolution DA using JEDI is consistent with
GFSv18 development, ~5 years out.
NHC would like to see us optimize and verify the wind speed probability thresholds. Need to make sure we get data
into AWIPS.
Need to think about physics issues, particularly in the “gray zone” for parameterized deep convection.
NHC: top priority should be to make sure spread-error score is appropriately tuned for track and intensity at all
lead times.
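The spread-error tuning NHC asks for can be sketched as a spread-skill check: for a statistically consistent ensemble, the mean ensemble spread should match the RMSE of the ensemble mean at each lead time. A minimal sketch with synthetic intensity data (all numbers invented, not HAFS output):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative only: a synthetic, statistically consistent ensemble of
# intensity forecasts (kt); the case count, member count, and error
# magnitudes are invented for the sketch.
n_cases, n_members = 500, 21
center = rng.normal(100.0, 15.0, n_cases)            # hypothetical forecast "signal"
truth = center + rng.normal(0.0, 10.0, n_cases)      # verifying intensity
members = center[:, None] + rng.normal(0.0, 10.0, (n_cases, n_members))

ens_mean = members.mean(axis=1)
# Spread: square root of the mean ensemble variance across cases
spread = np.sqrt(members.var(axis=1, ddof=1).mean())
# Error: RMSE of the ensemble mean against the verifying values
rmse = np.sqrt(np.mean((ens_mean - truth) ** 2))

# For a well-tuned ensemble, spread ~ rmse (up to a sqrt((M+1)/M) factor),
# so this ratio should sit near 1 when binned by lead time.
print(f"spread={spread:.2f} kt  rmse={rmse:.2f} kt  ratio={spread / rmse:.2f}")
```

In practice the same ratio would be computed separately for track and intensity at each lead time; a ratio well below 1 (underdispersion) is the failure mode that makes forecasters stop trusting the ensemble.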
Need to develop improved precipitation forecasts for TCs. Inland freshwater flooding is a major source of damage and
loss of life.
Need to verify probabilistic wind swath and probabilistic precipitation forecasts. Does a 40% chance of wind above X
threshold or precipitation above Y threshold verify 40% of the time?
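The verification question above is, in effect, a reliability (calibration) check. A minimal sketch using synthetic, perfectly calibrated forecast probabilities (all values invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative only: synthetic forecast probabilities that a wind or
# precipitation threshold is exceeded, with outcomes drawn so that the
# forecasts are perfectly calibrated by construction.
n = 20000
p_forecast = rng.uniform(0.0, 1.0, n)
outcome = rng.uniform(0.0, 1.0, n) < p_forecast   # event occurs with prob p_forecast

# Reliability: within each forecast-probability bin, the observed event
# frequency should match the mean forecast probability (a 40% forecast
# should verify about 40% of the time).
bins = np.linspace(0.0, 1.0, 11)
idx = np.digitize(p_forecast, bins) - 1
for b in range(10):
    sel = idx == b
    print(f"forecast {bins[b]:.1f}-{bins[b+1]:.1f}: "
          f"mean p={p_forecast[sel].mean():.2f}  obs freq={outcome[sel].mean():.2f}")
```

With real forecasts the two columns diverge wherever the product is over- or under-confident, which is exactly the diagnostic the summary calls for.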
Frank Marks proposes that we create two new Tiger Teams:
The first is a HAFS DA JEDI transition team that develops an implementation plan and oversees progress and
eventual transition. Target HAFSv3 FY25 transition, coincident with GFSv18. Jason Sippel to co-lead.
Second team will address uncertainty. First develop a proof of concept, develop new probabilistic tools and
products, and then calibrate and verify these products. Explore cost versus benefit of single-model versus
multi-model ensemble. Leverage ML approaches, such as DESI and TCANE.
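As an illustration of the simplest probabilistic product such a team would calibrate and verify, an exceedance probability can be read directly off ensemble members (hypothetical grid, member count, and values; not the DESI or TCANE methods):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative only: a hypothetical 10-member ensemble of 10-m wind speed
# (kt) on a small grid; the sizes and distribution are invented.
n_members, ny, nx = 10, 4, 5
wind = rng.gamma(shape=4.0, scale=12.0, size=(n_members, ny, nx))

def exceedance_probability(ens, threshold):
    """Fraction of ensemble members exceeding `threshold` at each grid point."""
    return (ens > threshold).mean(axis=0)

p34 = exceedance_probability(wind, 34.0)   # tropical-storm-force wind
p64 = exceedance_probability(wind, 64.0)   # hurricane-force wind
print(p34.round(1))
```

The calibration step then asks whether these raw member fractions verify at their stated rates; if not, they are statistically adjusted before release, which is where the single-model versus multi-model cost/benefit question enters.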
Aaron Poyer closing remarks: Last meeting of HEOB: Ken Graham and Steve Thur are both extremely supportive of
HFIP, past and future. Looking to get additional SES representation at next year’s HFIP annual meeting. Dates
TBD.
Offline closing discussion with Frank: we should continue to lean forward and propose an additional 50%
improvement for track and intensity, including RI. HFIP has a history of success in meeting these goals, and these
numbers tend to resonate with Congress. We may want to change the priority timeframe assigned to the goals from 48 h
to 72 h, to better align with the actionable priorities outlined in the Lee County EM and FEMA talks.
Session 7A: Developer Support and HPC
- [Question about how physics changes are coordinated]
- Great example of something that fits into the hierarchy. It inherently becomes part of a convection scheme, which is 3D and high resolution, but we are trying to exercise some aspects of the hierarchy to evaluate what is needed: What kinds of observations are needed? All variables at the process level - how can we get them in? Use of the CCPP, developed by DTC and shared with the community, allows us to go from one step to another more efficiently than doing a single big test. If you want a robust finding, try to execute some of the simpler steps in the hierarchy, then go into regional, global, and coupled models to evaluate before you spend a lot of compute resources.
- In terms of the hierarchy there will be priorities. Common features, like over land, would be YMP. What are the priority places where you would like to nail new things down? Let's fix those and go from there.
- How do you prioritize? The comment about OMP is exactly right, because the characteristics of land are different than over ocean. As a physics community, for testing and evaluation we need to see what bubbles to the top: what are the most glaring issues? Can we develop a top-10 list of the biggest issues going on in regional, global, HAFS, etc. - even with what we know about parameterizations - and maintain it, focusing time and energy on those things? DTC visitor projects could be a great example.
- [To Maoyi] Two major hiccups on Orion over the years, associated with releases of spack-stack. Sometimes it works, sometimes it doesn't, and we can't spend extensive hours to figure it out. We need better communication, such as an announcement: what is in an update or upgrade? Half of the issue is the volume of library changes. How can we get ahead of this problem of internal releases?
- Spack-stack is a collaboration between EMC, JCSDA, and EPIC. There is not good governance on how info is shared with other UFS teams, partly because spack-stack is not approved for operational implementation at this moment, but it will be - so it is a transition issue. Submit a high-level ticket to the UFS community so we can track it; it is an important issue to track.
- [To Wallace and Mark] Question about overland wind reduction. The analysis resolution is 1 km, but the model has 10-km resolution. Are you using surface roughness or some other integrated roughness? What do you use for representative surface roughness?
- At 1 km we have a land surface database that is appropriate for that. We had to average surface reduction factors up to the scale appropriate for the particular model run; we want to go down as small as possible to retain at least some of the detail. At 5-10 km you can't go back that far.
- Hierarchical test system: is it possible to do tests on critical developments that we want to test, step by step - changes related to the physics?
- It is not a software package but a process, a systems-engineering process. We should talk about it and see what we can bring to the table. The 2D step is notional; a small domain is a great place to do initial testing on convection. It is an R2O process, and this is definitely where DTC is working with the community.
- [To Frank and Radhika] Where do observations fit into your hierarchical system? We need to evaluate by characterizing the actual process from observations. How does that fit in? How do you view it today? We have the ability to go out and make observations that we then use.
- You can go with typical NWP metrics, but those are integrated - the combination of things working together. There are any number of things that are process-level, instead of just saying we don't observe those things. Data mining is in order here; look at the DOE ARM sites over land, for example. Squeeze information out of the observations we have and see how that helps our models. Line everything up on the table as observations, then look at where we need them, or whether we are addressing what we think we are, and then diagnose where our biases are.
- Talk to Aaron P about monthly seminars
- As we move forward, the power draw of HPC systems. GPUs use less power and are becoming common in HPC clusters; NCAR colleagues are using GPUs. Streamlining pipeline processes onto GPUs - is that part of your plan?
- Yes. We currently support GPUs, and there is confidence in GPUs; the latest compilers will be ready for that. GPUs are in high demand across the country and world because everyone is using them for AI work; NVIDIA said the supply chain is looking at 6-12 months. Let Radhika know if you need GPUs and she'll put the request in as early as she can. Work with vendors; she has established personal relationships with account reps at NVIDIA and others.
- [To Radhika, re: the best way to reach out for support]
- Coming up with a process to make sure we understand the requirements of what is needed; EPIC is supporting these efforts. We need to understand from the user community the issues faced with porting, and line up the capabilities necessary for those particular projects. We have the team; now it is a matter of lining up what the issues are and who will support which project across all the systems. Try to understand what issues users are facing, then do the matching.
- Do you see EPIC as a user in this context?
- Sure. I think we can have discussions and work out how this is going to support you. You are also supporting your own users. We need to plan this out so we are not stepping on boundaries.
- [Question from Xuejin]
- When you plan an HPC acquisition, software and training should also go into that acquisition. Most scientists still code for CPUs. When we transition to GPUs, consider a lot of software plus training - we need money for that. A machine that just sits there still costs money.
- Community involvement - everyone needs that. A common request: provide the common architecture of the platform so I can run my systems and do benchmark testing using those applications, so the system can support those applications. Scientists need time to adapt to a new way of writing code.
Session 7B.1: HPC Future and Strategic Planning
- What is your vision of R&D HPC compared to operational HPC? How much do you buy for R&D HPC and TCS, and how much do you buy for operations?
- We don't have a "right answer"; it's an ongoing discussion. There needs to be more R&D. What will happen over time is that we will get to a phase, whatever that may be, where there is a baseline, and we have to be flexible and work quickly. In the same line of thinking, we need to make investments with a 10-year outlook on how we will grow.
- Conversation with Bill R from AOML: the edges of the workflow being more agnostic than they are. Is that approaching something like distributed computing, where we put DA on one system that gets restricted data access, and the other parts of the workflow don't have to have restricted access?
- The way we do it is the way we've always done it, and we have options to make that better - see who, what, where, when, and how we can do better. On stretching the workflow out: in a perfect world, for example, you would start processing things like DA inputs faster, put things in the right place at the right time, and speed up that whole chain by stretching the processing over a bunch of different assets rather than one box. Or if you're taking sensor data...
- It's more a matter of restrictions on where observations can be placed on different systems. If we ran the DA and digested it into a relatively compact package of data no longer considered restricted, we could throw that to a less secure cloud server and analyze quickly. Cloud seems ideal for the real peak amount of compute that we need once per year, but restricted data makes that challenging.
- We've worked on this a couple of times with partners; it's not easy. How do we make data available in ways that we haven't before? How do we find solutions to make things available to the right people at the right time? How do we make it so that you don't have to understand every access point? If we have a system that covers NOAA, we can build access into that. The market is not there yet, but we're working towards it.
- [Avichal] Pleased to hear from Radhika's presentation that there is an attempt to unify system configurations across platforms. That will go a long way for community developers supporting code, and inside the code, uniformity of utilities and libraries. Healthy development.
- [Re: access to HPSS] From some systems, like Hera, it is relatively straightforward and we have access to everything. In the context of conducting real-time experiments, having direct access quickly means we don't have to spend time worrying about extra data-transfer steps. That is not the case for other assets. Those are useful machines, and as time goes on their performance has been improving, but access to HPSS remains an issue. Something to keep in mind if it can somehow be mitigated.
- Making things more open on one side causes issues on the other side. There are ways we can think about solving this through software or hashing; there is more technology out there now that might help. We have all this data in different places and we want to make it available as much as possible. What does that framework look like? A lot of tools out there treat your data like a pipeline module - how do we make those usable? Look at it from multiple perspectives to make it easier.
- [Xuejin] On the slide you put LLM - not everyone knows what that is.
- Large Language Models - things like ChatGPT. This is a policy issue within NOAA that we're working through: DOC issued policy on this, and it is now out to NOAA. Working towards pilot programs for LLMs. There has been really good development in this space, and it makes LLMs more accessible for us. Google, Microsoft, and Amazon all have ways to get to those models without exposing your data or feeding it into training. Concerns are out there, but we are working toward a world where they won't be a concern much longer. ChatGPT and Bard may still be something we don't engage with, but there are versions that we will.
- Basically generative AI. Yesterday Dev gave a talk about information generated from AI; NOAA has a policy not to use that. Applications are just starting to come out of academia. We need this kind of exposure and education. Training is important for us, so continue to voice that when you make a big purchase, don't forget to leave funding to train the scientists who will engineer and adapt to this kind of transition, because it uses natural language. An average scientist's domain knowledge can last 20 years, but computing knowledge probably needs updating every 6 years or less.
- Training is going to be throughout the process. This is different from other agencies' missions, where there are legal concerns. Even in our realm, every dataset we have and use probably has a bias in it; we work through that normally. We are starting to have discussions, before we put data into models and train with them, about the ethical implications of the data being used, what's missing, and where. It goes back to the workforce: we need to prepare for it, and it can't just be "here's a tool" - people will miss big-picture things, like the larger ways you can implement it to help you. I support you mentioning that all over the place.
Session 7B.2: Strategic Goal Planning
- [Frank and Gopal] Is there any way we can assign numbers or actionable/quantitative goals to number 4? It seems more ambiguous.
- Not more ambiguous. Look at the appendix of the strategic plan - targets are listed there. They were developed by our operational folks to be reasonable, but they should be revisited and called out, because this will be a major effort in the next 5 years for whoever in the community wants to be involved in strategic plan development. The 2019 plan had those goals and six key strategies. One was to advance HAFS; the big bottleneck is what we heard from the operators, like the HAFS JEDI transition - if GSI is going away, this is a major push. Daryl gave a passionate plea that he needs help; we react, but need to be more proactive. Second was improved probabilistic guidance. Mark talked about TC something, and we heard about DESI and machine learning and WTBC. That strategy, with machine learning added, is a pathway we have to take. We need to quantify these metrics, so I want to get from the operators what they need. We talked about reanalysis; to me, that says DA. It means we have to do the JEDI transition before we start doing reanalysis, but we should plan for what that means - we need to do some work before we jump on the reanalysis bandwagon. Communication of risk and uncertainty was the third key strategy. We heard about triangulation, and talks from Castle and stakeholders about what they're looking for. The tropical roadmap picture needs to be worked into the next 5-year plan. Jess has good ideas but no support from us yet; this links to the problems with probabilistic guidance and will impact social/behavioral science and how we link to it. Can we do scenarios? Yes, like cluster analysis and machine learning approaches. How do we branch out? Public messaging would be difficult. HPC is not as much of a challenge moving forward, but is needed, particularly given some of the ideas on ensembles, etc. The DTC-to-EPIC transition is something we need to navigate: containerization, working with DTC to support our training modules - we need to figure that out. Need to have a strategy table. Don't want to lose the community.
Going to have a look at UFS and UFS R2O and bridge into that. On intensity we had some issues, especially with Otis, where we needed to look at xyz; those are things we need to tease out because they will inform our HAFS development in the future. Philippe and Otis stood out. We have some ideas and testing to do; I think that's important. HAFS-A was worse in the Atlantic but better in the East Pacific. Diversity is something we have to address: we need to think about diversity in initial conditions, not just physics. One of the areas where HAFS-A and -B differ is how they treat when to turn the eye on and off - a potential area of interest. Let's start HAFS-B with a different center. Hillary and Idalia forecasts were good for track and intensity; track was worse for weak and sheared storms - a VI issue? HAFS-A/B were better than GFS overall. We need to work more with the GFS folks; we have been reacting to GFS and need to work together on changes. We're all driven by GFS. An R34 high bias exists in structure. Each hurricane specialist does R34 a little differently in operations; for best tracking it's OK, but we need to call R34 the same thing the modelers are calling R34 - a little more formality in defining those things so we are comparing apples to apples.
- [Gopal on moving forward] We welcome everyone in the community who has an interest in the work. Want to make sure we address the issues and how we are testing; this will be useful for EMC and forecasters so we can improve HAFS. Do we want to develop a plan?
- Finding the issue is the first-order thing we need to do; then we find a solution. At the moment, especially with Philippe, we don't have a solution, nor with Otis. Would like everyone to stay on those topics and present at the hurricane conference. We will also attack some of the problematic forecasts. Would like everyone here to present their solutions at the hurricane conference.
- Looking at ocean response and model physics, we have pretty good ideas of how to address the issues. For Philippe and Otis we don't have clues right now and would like to collaborate with HRD and other universities, with some conclusions and results to present at next year's AMS meeting. It is more a track problem than intensity; before we fix track, we can't talk too much about intensity.
- Have to look at both track and intensity, so you need the money. Similar models, as can be seen from the statistics; some differences in the initialization.
- Early on, other models were split between continuing westward or turning north.
- We do notice for the Philippe track forecasts, running ensembles, that there are large uncertainties. Could be a predictability issue.
- If it were by chance, you would have seen many such cases; it had to be systematically better, with almost every cycle. We will do more work on Philippe. The answer has a lot to do with how HAFS-A/B have different initialization.
- [Question for Frank] Why time-based improvement goals? Was there any perspective or metric regarding localization or spatial improvements?
- Can't remember; the strategic plan group was 12 people. Track is critical to everything we do. In terms of localization, we knew it had to be dealt with.
- [Philippe question] Errors for both track and intensity. Vertical structure is really important to get correct. HAFS-A diagnosed the structure better. Tilt was removed from the mid-level vortex. HAFS-B was substantially improved because of tilt: the storm tilted downshear and stayed tilted for several days in a row. Models that immediately tried to make the system vertically aligned took it to the north and intensified it, like GFS.
- Concentrate on what the physics differences are between the two models, what would lead to one model not having the correct vortex structure, and why the other does. Diagnose what the physical reason is and why one moves one way. It's depressing that one did and one didn't, but encouraging, because the models are so similar that you can isolate and test the differences and see what differs and why.
- The intensity forecast of one storm impacted the track of the other, so it could help to have the higher-resolution nest encompass both storms, or multiple moving nests.
- Performance differs within basins. Figure out how we can provide better guidance on where errors are larger/smaller; they don't have a lot of guidance right now. Better communication about what the biases are, limited by sample size. Dabbling in this at HRD, but we need help.
- As we progress with each of these problems, if you could share in the HFIP monthly meetings - the final target will be the AMS meeting - sharing what's coming out of the research will help.
- Because of the annual meeting in Nov/Dec, the next one won't be scheduled until February. Please reach out to Aaron and Will to get on the schedule.
- Go to the AOML website and EMC website and tell us the problem.
- Modeling priorities - are these only short term? Let's focus on short-term plans. This will also help STI and UFS. Let us know if there are objections to this plan.
- [From Zach] Moving-nest work is very important for the nesting environment. A key for rapid intensification is hurricane model resolution; if we have these capabilities, we can increase resolution.
- Otis, at least so far, is in 3-km form, so we will do further refinement and keep working on it. It still points to resolution.
- Flexible refinement. Discussions with GFDL; it may not be an issue. We do back-and-forth interpolation between one vertical grid and the other; velocity is well handled there for us. The vertical adjustment process is more conservative. Where we gain something, we will be using something else.
- JEDI: an issue we will be hashing out with Frank - expectations and challenges. Is 6-km basin DA with 2-km multiple nests possible for HREX next summer? Is uniform 6-km global DA sufficient?
- Depends on development. Conversation with Gus: possible for real-time experiments at a cost, but for operational implementation there are still lots of issues to be solved. Higher-resolution ensembles again come down to resources, like cloud resources. Two choices, dependent on resources.
- Want to pursue one of these options. May need to reduce the number of cycles from 4 times a day to 2 to reduce expenditure. Working with Radhika already.
- For the ensemble on the cloud this year, we ran something quite stable. Next year we will have higher-resolution 3-km; we can try to reduce to 2 cycles and split into members. They plan to pull data in real time, so we have to process it.
- How did it do in comparison to GEFS or another baseline ensemble system?
- Limited by the perception that the ESG grid can only be so large, so it can't cover the entire NHC area of responsibility. The approach is to do basin-centric domains to start. Any way we can hook up with what's happening on the ensemble or DA side, we will do that as well. Think about how we can phase with other developments going on.
- DA initiated at 6-km resolution is being done, and the global analysis is thought to eventually be done at 6 km.
- Should explore what the global system can do to make our job on the hurricane side easier. Example: what can you live with from GDAS? At what point do GDAS and the EnKF become sufficient that we don't need to run our own self-cycled system in house? That makes life easier for everyone and reduces complexity. Broader question: what can change on the global side so we can do our job more easily?
- What can we expect next year, given that Zach is already running a 6-km DA system?
- At least 5 years. The global transition from GSI-based to JEDI-based DA has to happen first; then we can talk about 6-km GDAS systems. We're probably talking about 5 years.
- The idea is open to discussion. We shouldn't wait for the JEDI transition before we move forward. If the current system can run 6 km, then run it; but if resources are not available, then it's not a question.
- The context for doing 6 km is more in line with GFSv18, and the JEDI transition is also on target for GFSv18. For us, the global applications, and other UFS applications, the JEDI transition needs to take priority before we start exploring expensive and futuristic configurations.
- A plan? A pathway you would take?
- Can do real-time experiments; for operational, you have to wait.
- There may be a convergence in about 5 years when we do v4/5 of HAFS, and we should be setting the table for this. How we get there is the question. Two suggestions.
- Any global DA change involves dynamics and physics. Gray zone.
- Forecasters are not going to wait 5 years for improved forecasts.
- DTC is taking all approaches into account; it will be based on something more storm-centric and start to bring in large-scale DA. Keeping options open to be as flexible as possible as things are developed.
- Some things for DA. One thing is that we want to adopt sub-cycle DA; we did that with HWRF but not with HAFS, and want to do it for v2. The 1.1D experiment will tell us - we have not had a chance to fully evaluate - and compare to the initialization we are getting in the last one. It would tell us how well the scheme or method works and what tweaks are needed. Also considering, in the context of building the hooks, the large domain with multiple nests. There are other things we can try: we talked about multiscale DA and introducing it into HAFS. There is stuff we can still work on, but in terms of a change of DA strategy overall, we are better off talking about the JEDI transition first.
- Our job is to minimize death and destruction. What causes death and destruction? A swath of wind over water and land; wind drives water (surge). If we say there is a 20% chance of something happening, we need there to be about a 20% chance of it happening, because credibility depends on it. It doesn't have to be a number - it could be "likely" - but make sure that number is representative of the actual risk. Need to start routinely verifying radii, and looking at things over land more, because that's where things happen, including rainfall.
- Not possible with a deterministic model. That's why number 6 is at the top of that section.
- Data mining of ensembles for best usage. Need novel approaches to use of the ensembles to get the most out of them.
- We provide all ensemble products on the website. Do these satisfy, or do you have any requirements for us?
- A lot of the products on the webpage, assuming they are skillful and reliable, would be helpful in terms of quantifying the uncertainty of the swath forecast.
- Start thinking now about how to present ensemble data in terms of something we can put in AWIPS, so a local office can use it to make local warning and collaborative decisions within the office. A starting point is basic wind-speed probabilities. There may be a day when local forecasters are not using the web a whole lot; anyone can get out there now, especially anyone inside.
- Do we need to build the plumbing right away?
- Talking about real time, not about the operational setup for the hurricane system. At EMC we have a big gap: operational systems have higher resolution. Not there yet.
- In the context of providing probabilistic guidance, we need models in which the forecasters are confident. What is possible in AWIPS? Not just NHC - all centers are thinking about what is possible with AWIPS now that AWIPS is on the cloud. There will be other customers, especially when talking about surge and coastal weather forecasts; everyone wants this information. There are going to be other issues in the context of what kinds of products we can integrate. Our webpage is dedicated to generating products to evaluate the science; what products we generate and provide to forecasters and decision makers is a separate issue. Techniques are being looked at extensively to come up with probabilistic guidance.
- John Martinez is the HFIP model diagnostics technician. In the early stages, he has taken sample hurricane data off the cloud and is starting to prototype products that could get in front of forecasters. Early stages.
- What's required?
- Working on putting model output side by side with observations collected from key field programs and outside field programs too. Ensemble products: we can get started right now thinking about a 4-member hurricane ensemble at half speed. Connecting lead times left to right, but think about something familiar and reframe it, so we can think about what the range of possibilities is and whether there are enough members in that range.
- Top priority is making sure ensemble spread is appropriate relative to expected error. If we get a bunch of cases like Lee, forecasters aren't going to use it. If it's not tuned appropriately, it doesn't matter what the products are. First need: an ensemble that captures the range of possibilities and is well tuned.
- Trying to build a detailed surface wind analysis; it has to be tied to DA. What should we be using for ground truth in evaluation and for initializing storm surge models? This opens up possibilities from HWRF; in the shorter term we could get a team started and pool knowledge and data sets.
- Jim Nelson showed how HAFS did. Jim and WPC are interested in doing post-season evaluation of HAFS-A/B forecasts in a few months, after winter. On rainfall, we will be getting useful feedback on HAFS v1 next year.
- Brian Cole: for new observations, want to point out for the next strategic plan that we have 3 nice concepts for observation gathering, none of which has a path towards regular use or operations. Encourage future strategic plans to look at how we're going to evaluate those and which ones meet whose requirements.
- Praise for the probabilistic wind products. Also debuted this year: probabilities of different precipitation thresholds. We'll need to go back and help verify whether those probabilities are well calibrated.
- Being able to continue and have more research on the DA side; we are doing it ad hoc, and it is technically not funded anymore. Would be nice to get DA money for new observations.
- Stop funding equipment and fund DA (haha)
- HFIP should be a part of improving observational networks in coastal areas. That would help with model verification and monitoring landfalling storms.
- Frank's proposals
- At least 2 tiger teams; asking people to take charge. First: HAFS DA transition - develop the transition in coordination with GFS and RRFS. Suggested target: HAFSv3 in FY25. Leverage the JTTI support for DA that we heard about. Should that be our target? Daryl volunteered to be part of that tiger team, and Jason to be co-lead. The second has to do with uncertainty: develop a proof of concept; provide storm-specific ensemble model statistics to produce the PDFs needed to drive their operational products; work with them on how to use the data; evaluate how that goes. Multi-model may give better spread. Asked Wallace to be lead and to get a researcher to work with them. There are two tools we can take advantage of to develop products. Get them data to inform the uncertainty.