Half wayish through the seminar 2

This commit is contained in:
1jamesthompson1
2025-09-22 17:37:35 +12:00
parent 3344bc0c7a
commit 240de0e11a
6 changed files with 55 additions and 5 deletions


@@ -0,0 +1,12 @@
AI for Demand prediction in Retail and E-Commerce
This group is talking about using AI methods for demand prediction in the physical retail and e-commerce setting. They survey 7 papers that cover statistical methods, classical machine learning methods, and modern deep learning methods. The older statistical methods are used as baselines and give the model user much clearer interpretations and a better understanding of the model's assumptions. The move to modern DL methods has given model makers the opportunity to feed more varied input information (weather, economy, etc.) to these models, as opposed to just simple sales history. It does seem from this survey that classical machine learning methods are more prominent (and effective?) than modern DL architectures. Interestingly, it is hybrid systems that outperform others in many of the studies; these hybrid systems are either ensemble in nature or segment the problem into multiple steps. I would have thought that, given the scale of the data available, some novel DL architecture could be constructed to maximally use both the data and the compute power that is now available. I find the concept of the GNNs to be of most interest, specifically as it could relate to private federated learning, allowing various companies to "share insight" to maximize efficiency.
Seven papers: supermarket demand; DL packaged-goods predictions; a survey where modern is better; comparing trees and LSTMs; clustering prior to regression; spatial GNNs; comparing models on Walmart data.
Questions:
- Do these models work by predicting the amount of demand for a specific product in a time period, or are many of them trying to predict individual customers' purchasing habits?
- In the paper comparing different models on the Walmart data it seems to me that the performance gains are trivially small; am I missing something?
- In any of the papers, were these models deployed to be used by the actual companies that would need this sort of data?
- It seems to me that this time series data would suit itself well to continual learning or even stream learning (for short-term predictions); is this idea used in the demand prediction field?
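The segmented, multi-step hybrids mentioned above can be sketched minimally as a statistical baseline plus a learned correction. All numbers and function names here are hypothetical, just to illustrate the two-step idea:

```python
# Hypothetical two-step hybrid demand forecaster:
# step 1 is a seasonal-naive statistical baseline,
# step 2 fits a simple linear correction on top of it.

def seasonal_naive(history, season=7):
    """Predict the value seen one season ago (e.g. same weekday last week)."""
    return history[-season]

def fit_linear_correction(histories, actuals, season=7):
    """Least-squares fit of actual = a * baseline + b over past examples."""
    xs = [seasonal_naive(h, season) for h in histories]
    n = len(xs)
    mx, my = sum(xs) / n, sum(actuals) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, actuals))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var if var else 1.0
    b = my - a * mx
    return a, b

def hybrid_forecast(history, a, b, season=7):
    return a * seasonal_naive(history, season) + b

# Toy weekly demand series; the correction learns a constant +2 uplift.
past = [[10, 12, 11, 9, 14, 20, 25],
        [8, 11, 10, 9, 13, 18, 20],
        [12, 14, 12, 10, 15, 22, 30]]
actual_next = [27, 22, 32]
a, b = fit_linear_correction(past, actual_next)
print(hybrid_forecast(past[0], a, b))  # 27.0
```

The same structure generalizes to the surveyed papers' richer versions: swap the seasonal-naive step for a statistical model and the linear correction for trees or an LSTM.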


@@ -0,0 +1,11 @@
Ethical Dilemmas of AI Humor
Team 10 looks at AI-generated humor. In particular they discuss what concerns we should have when thinking about this application. There are many concerns they bring up, but I will highlight the ones that I feel are the most pressing.
The first is the feedback loop between AI systems and personalized humor: regular exposure to AI humor allows many troubling mechanisms (reinforcing stereotypes, etc.) to have a runaway effect.
The challenge of assigning blame or credit is difficult. I feel that for many modern systems the negative consequences ought to be laid on the person closest to distribution, i.e. it is the person who shared the offensive joke written by OpenAI, it is the Twitch channel that let the AI run wild without enough controls. As the legal and regulatory framework adjusts to the new technologies, there is great opportunity for it to set up good or poor incentives for safe development.
Lastly, I think the regulatory challenge of policing AI and content creation walks a fine line between doing it properly and pushing your own agenda. Someone needs to decide what is offensive, and no one can really do that but ourselves; therefore it is prudent that we not let a handful of companies decide what we globally find acceptable or not.
Questions that I have are:
- In the research on AI bots generating humor, are they all transformer-LLM based?
- What say you on the Seinfeld situation, should it have been cancelled?
- Did any of the papers test to see how distinguishable the AI- and human-generated humor was?


@@ -17,9 +17,9 @@ These ones are in August and each seminar is only 25 minutes long. There are two
These ones are in September. Each one is 1 hour long. I completed 6 seminars.
- [x] mon 15th sept demand prediction team 2
- [x] tue 16th sept stock price prediction team 6
- [x] mon 22nd sept humour team 10
- [x] tue 23rd sept safe guarding infrastructure team 3
- [ ] wed 24th sept classroom team 1 (partly done)
- [ ] wed 1st oct crop management team 8


@@ -0,0 +1,8 @@
AI stock price prediction
Team 6 did stock market prediction using AI, which is unique in the sense that predicting stock market prices has been happening since long before computers. I like to think of the stock market as a decentralized federated learning process that does a really good job of predicting the price of a stock (most of the time). Therefore, to beat the large federated process you have to have special information (usually illegal to use) or use the available information better. As one might expect, AI systems can be better at predicting stock prices due to their ability to remove emotion and add precise mathematical evaluation, with deep learning based methods having among the best performance across all the papers surveyed by the team. The most interesting avenue that I see is that of sentiment analysis, mainly due to how it adds an order of magnitude more external information to the prediction process. The ethical considerations of stock market prediction are interesting, particularly from the regulatory standpoint, as there are various rules in place that are supposed to level the playing field and make it a fair game. Yet large players have an inherent advantage simply due to the scale of their resources as well as the actual value they can move around. I see these AI systems simply increasing the power of larger players to outperform retail investors. In the profit-maximizing world of stock markets and investing, I am unsure that an ethical approach can be found (bar scams and fraud), as it is inherently about beating the other players using whatever means (beside some ground rules).
Questions and clarifications that I have are:
- With the ChatGPT sentiment analysis, how did this work? Was it along the lines of "Here is a news piece, please return a number from -1 to 1"? Was there investigation into the accuracy of the sentiment scores given?
- Will the EU AI Act affect companies using these models to determine customers' investments, i.e. force companies within the jurisdiction to have "investment explanations" ready to go?
- Was the ChatGPT sentiment analysis deployed and profited from prior to publication?
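The prompt format guessed at in the questions above could be sketched as follows. This is purely hypothetical (no real API is called, and the prompt wording is my guess); the point is that whatever free-text reply comes back still needs robust parsing and clamping into [-1, 1]:

```python
# Hypothetical prompt-based sentiment scoring: build a prompt asking for a
# number in [-1, 1], then parse and clamp whatever the model replies with.
import re

def build_prompt(news_text):
    return ("Here is a news piece about a company. Reply with only a single "
            "number from -1 (very negative) to 1 (very positive).\n\n" + news_text)

def parse_score(reply):
    """Extract the first number in the reply and clamp it to [-1, 1]."""
    m = re.search(r"-?\d+(?:\.\d+)?", reply)
    if m is None:
        return 0.0          # treat unparseable replies as neutral
    return max(-1.0, min(1.0, float(m.group())))

print(parse_score("Sentiment: 0.7"))    # 0.7
print(parse_score("I'd say about -2"))  # clamped to -1.0
print(parse_score("unclear"))           # 0.0
```

How often the model's scores agree with human labels (the accuracy question above) is exactly the part this sketch cannot answer.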


@@ -0,0 +1,10 @@
AI and safeguarding critical infrastructure
Team 3 looks at AI for monitoring and protecting various infrastructure. I see that there are three groups of applications.
The first one is the use of computer vision to augment video collection and labelling. Doing this on both waste trucks to detect roadside problems and tracking people on surveillance cameras seems dangerously dystopian (even with "optional" privacy features). Team 3 discuss what I see as the biggest problem, which is holding all of the data with personal information (faces with times and places). As both papers suggested, having on-device edge inference would mean that only "safe" data would ever leave the device (for example "broken stop sign at 11 Brent Road"). It is much easier to manage a dangerous thing (personal data) if it doesn't exist in the first place.
The second group is that of automated monitoring of large infrastructure sensor suites. It seems quite reasonable that using AI to monitor these vast data streams would be more effective than classical statistical methods or manual human work. Compared to the other two areas, it would seem much safer to implement these, assuming we have valid assurance that performance is improving.
The last area is that of using AI modelling capabilities to understand climate change and sea level rise. It seems that these systems are effective. However, given the complexity of any climate modelling, there is a great risk that these systems predict/suggest either too big or too small a response. Both of these could have large real-world costs.
Some of the questions I would have are:
- What size model was used for the detection of roadside problems, and were these models small enough that it would be plausible to use them on edge devices?
- What were the absolute response times for the cybersecurity paper? Is it on the scale of days, hours, or minutes?
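The "only safe data leaves the device" idea from the first group can be sketched very simply. The detection labels and record format here are made up for illustration; the point is that raw detections (which may include people or faces) never get transmitted, only anonymised event records do:

```python
# Hypothetical on-device filtering: raw detections stay local, and only
# sanitized infrastructure events are ever sent off the device.

PERSONAL = {"face", "person", "licence_plate"}  # assumed label set

def to_safe_events(detections, location):
    """Keep infrastructure findings, drop anything identifying a person."""
    return [f"{d['label']} at {location}"
            for d in detections
            if d["label"] not in PERSONAL]

raw = [{"label": "broken_stop_sign"}, {"label": "face"}, {"label": "pothole"}]
print(to_safe_events(raw, "11 Brent Road"))
# ['broken_stop_sign at 11 Brent Road', 'pothole at 11 Brent Road']
```

Whether the detection models themselves fit on an edge device is the open question from the list above.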


@@ -0,0 +1,9 @@
AI in the classroom
AI for students: help students do work, help students learn with tutoring, simulations as a specific form of tutoring.
AI for teachers: plagiarism detection, automatic marking, student monitoring, misc topics (admissions, assistants, etc.)
Custom learning content is good. Automated grading is better. Interactive learning and tutors are good (although they might not be as effective).
Ethical concerns: personal data, changed role of the teacher