DTCC - The Future of Coding with AI
Discover innovative ways to use AI and other emerging technologies to benefit and advance DTCC’s goals.
Let’s explore how we can adapt some of the AI/ML work in software development done at NYU to help launch our VIP research program.
Start with greenfield coding using GitHub Copilot X and GPT-4, and run the output through the entire software development life cycle (SDLC) with the tools we use in production, such as SonarQube (code quality) and a code checker (for copyright).
Then move to code translation: use GitHub Copilot X and GPT-4 to write new code in the target language framework, taking advantage of its features, and run it through the same SDLC process with the production tools (SonarQube for code quality, a code checker for copyright, etc.).
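The "same gate for AI-generated code" idea above can be sketched as a simple aggregation step. This is a minimal illustration only: the tool names, result shape, and pass/fail policy are assumptions, not DTCC's actual pipeline configuration.

```python
# Hypothetical release gate: AI-assisted changes go through the same
# tool checks as hand-written code, and every tool must pass before
# the change is promoted. Tool names/results here are illustrative.
from dataclasses import dataclass

@dataclass
class ToolResult:
    tool: str          # e.g. "SonarQube", "CopyrightChecker" (assumed names)
    passed: bool
    detail: str = ""

def release_gate(results):
    """Return (ok, issues): ok is True only if every tool passed."""
    failures = [r for r in results if not r.passed]
    return len(failures) == 0, [f"{r.tool}: {r.detail}" for r in failures]

ok, issues = release_gate([
    ToolResult("SonarQube", True),
    ToolResult("CopyrightChecker", False, "match against GPL snippet"),
])
# ok is False; issues lists the failing tool and its detail
```

In a real pipeline each ToolResult would come from invoking the actual scanners (e.g. the SonarQube scanner) rather than being constructed by hand; the point is that the gate treats AI-generated and human-written code identically.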
Publish a joint research paper/whitepaper defining developer efficiency with AI and automation in the SDLC for the fintech industry.
Majors and Areas of Interest:
Someone who is willing to explore innovative ways of using AI-enabled solutions in a real-world enterprise developer ecosystem in the fintech industry.
Methods and Technologies
- Copilot X
- GPT-4
- Code Checker
Related Grand Challenges:
- How do AI/ML and automation impact developer code velocity? Is the impact substantial enough for enterprises to adopt them in application production?
- How do AI/ML and automation impact a developer’s learning curve when picking up a newer technology or language?
- How do AI/ML and automation impact bugs injected by AI/ML into code? How can we proactively avoid them? What checks and balances need to be introduced as part of the code release process?
- How do you quantify the impact of the risk injected by AI/ML into code? How can we avoid that risk at a policy-framework level? I.e., how can I be declarative about it, giving the LLMs certain guidelines so the risk is avoided proactively rather than at the code level?
- How do AI/ML and automation impact developer behavior in general?
- What is the likelihood of running into copyright issues with generated code? What framework of guardrails would avoid them?
- Can we harness the power of LLMs and further train them on our internal code to make a custom “DTCC” LLM for our developers?
- What “grounding” rules need to be laid down for information flowing to and from the LLM? How is Microsoft approaching this, and can we have a similar framework within DTCC? What would that grounding framework look like?
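The copyright-guardrail question above could be explored with something as simple as fingerprinting generated code against an index of known licensed snippets. This is a toy sketch under strong assumptions (exact-match-after-whitespace-normalization, a hypothetical KNOWN_LICENSED index); real detection would need fuzzier matching.

```python
# Toy copyright guardrail: fingerprint whitespace-normalized code and
# compare against a (hypothetical) index of known licensed snippets.
import hashlib

def fingerprint(code: str) -> str:
    normalized = "".join(code.split())  # ignore whitespace differences
    return hashlib.sha256(normalized.encode()).hexdigest()

# Assumed index of fingerprints of snippets we may not ship (illustrative).
KNOWN_LICENSED = {fingerprint("def f(x):\n    return x + 1")}

def flag_if_licensed(generated: str) -> bool:
    return fingerprint(generated) in KNOWN_LICENSED

flag_if_licensed("def f(x): return x+1")  # True: same code modulo whitespace
```

A research direction here is how far such guardrails must go beyond exact matching (token-level similarity, license-aware training-data provenance) before the risk is acceptably low for enterprise release.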
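The "declarative grounding rules" idea above can be made concrete as a policy layer applied to all traffic to and from the model, rather than per-call code. The rules below (regex redaction of credentials and SSN-like patterns) are purely illustrative assumptions about what a DTCC grounding policy might contain.

```python
# Illustrative declarative "grounding" policy for LLM traffic:
# rules are data (pattern -> replacement), applied uniformly to every
# prompt and response. The specific rules here are assumptions.
import re

GROUNDING_RULES = [
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "[REDACTED_CREDENTIAL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED_SSN]"),
]

def apply_grounding(text: str) -> str:
    """Redact sensitive patterns before text reaches (or leaves) the LLM."""
    for pattern, replacement in GROUNDING_RULES:
        text = pattern.sub(replacement, text)
    return text

apply_grounding("api_key=abc123 for account 123-45-6789")
# -> "[REDACTED_CREDENTIAL] for account [REDACTED_SSN]"
```

Because the rules are declarative data, they can be versioned, reviewed, and audited like policy, which is exactly the "at a policy-framework level, not at a code level" property the grand challenge asks about.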