building datasets

#2
by LeroyDyer - opened

I think we need a long list of questions, with some existing answers and context. These should form the base of our distillation dataset,

so that we can send the questions to Qwen / Sonnet / ChatGPT / DeepSeek.

Right now I know they do not have the right answers, but if they have some kind of Wikipedia / research MCP plugin enabled, then they will have access to some online search and internal reasoning. (We should not provide the context ourselves.)
We need to be able to distill their responses and thinking (trains of thought), raising the probability of these questions having alternative responses...

so we can get some distillation datasets.
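
A minimal fan-out sketch of that collection step, assuming each provider exposes an OpenAI-compatible chat endpoint; the endpoint URLs, API keys, and model ids below are placeholders, not the real values:

```python
import json
from openai import OpenAI

# Hypothetical provider table: (label, base URL, API key, model id).
# All values here are placeholders -- swap in your real endpoints.
PROVIDERS = [
    ("qwen",     "https://example-qwen-endpoint/v1", "API_KEY", "qwen-max"),
    ("deepseek", "https://api.deepseek.com/v1",      "API_KEY", "deepseek-chat"),
    # ... add Sonnet / ChatGPT entries the same way
]

def collect_answers(questions, out_path="distill_raw.jsonl"):
    """Ask every provider every question; keep each model's raw answer."""
    with open(out_path, "w", encoding="utf-8") as f:
        for q in questions:
            row = {"question": q, "answers": {}}
            for label, base_url, key, model in PROVIDERS:
                client = OpenAI(base_url=base_url, api_key=key)
                resp = client.chat.completions.create(
                    model=model,
                    # no context supplied on purpose -- we want the
                    # model's own search / internal reasoning
                    messages=[{"role": "user", "content": q}],
                )
                row["answers"][label] = resp.choices[0].message.content
            f.write(json.dumps(row, ensure_ascii=False) + "\n")
```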
After merging the data file so that each model's answer sits in line with its question, we can then send it to another model (as a judge) to select the best answer based on our TRUTH context... creating another flavour for our output.
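
A sketch of that judging pass over the merged rows from the step above; the JUDGE_PROMPT wording and the "reply with a label" convention are my assumptions, not a fixed recipe:

```python
# Assumed judge prompt -- adjust wording to taste.
JUDGE_PROMPT = (
    "Here is a question, our TRUTH context, and several candidate answers.\n"
    "Reply with only the label of the best answer.\n\n"
    "Question: {q}\nTruth context: {truth}\n\nCandidates:\n{cands}"
)

def judge_row(client, judge_model, row, truth):
    """Ask a judge model to pick the best answer for one merged row."""
    cands = "\n".join(f"[{label}] {ans}" for label, ans in row["answers"].items())
    resp = client.chat.completions.create(
        model=judge_model,
        messages=[{"role": "user",
                   "content": JUDGE_PROMPT.format(q=row["question"],
                                                  truth=truth, cands=cands)}],
    )
    label = resp.choices[0].message.content.strip().strip("[]")
    return row["answers"].get(label)  # best answer, or None if unparsable
```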

We should make our Q&A multilingual, so we should convert all questions into Xhosa, Swahili, Hausa, Bambara, etc., so that our answers are also generated in these languages. (For 1000 questions and 10 languages, that gives 10,000 questions, shuffled.)

We ask in all languages, shuffled, for our final dataset, but also because the models often think differently in different languages, and they may even have internal (gaslighting) guardrails we need to overcome 😀
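
A sketch of that expansion, assuming a `translate(text, lang)` helper already exists (a local NLLB model or a translation API; the helper itself is not shown), and assuming a particular ten-language list; it keeps an `id` per source question so the versions can be paired up again later:

```python
import random

# Assumed language list (ISO 639-1 codes): Xhosa, Swahili, Hausa, Bambara,
# Shona, English, Yoruba, Zulu, Amharic, Igbo -- adjust to your targets.
LANGS = ["xh", "sw", "ha", "bm", "sn", "en", "yo", "zu", "am", "ig"]

def expand_multilingual(rows, translate):
    """1000 Q&A rows x 10 languages -> 10,000 rows, shuffled together."""
    out = []
    for i, row in enumerate(rows):
        for lang in LANGS:
            out.append({
                "id": i,      # source question id, kept for later pairing
                "lang": lang,
                "question": translate(row["question"], lang),
                "answer":   translate(row["answer"], lang),
            })
    random.shuffle(out)  # interleave languages so no single-language block
    return out
```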

We may also need to train or use upgraded models such as Cosmo, Kimi, GLM as distillations (locally) to produce more flavours of response.

The models need multiple versions of the truth, as a model will select the truth it has seen for the most epochs... but we do need alternatives to keep the models honest and able to reason in truth. Regarding history there is no one truth (there is), or we could say there is no actual truthful literal source which is unbiased; but to produce the discussion we do need the biased accounts also, so we can discuss them. So explorers' diaries, ships' logs, etc. we should also add to our reasoning.

My models (the main one, mythbuster.GGUF, has been trained on our knowledge and is the most stable) are the ones I use for my historical research as well as biblical research, so they have been trained with many sacred texts and bibles!

I noticed that people have not trained their models with religious knowledge, the histories of mankind, or the philosophers etc.! They really use this as a smoke screen, and they do want the models to have no morality,
as well as to keep their colonialistic perspectives in place...

They are also beginning to dumb models down for the public!
We need to stick to Chinese base models, as they are impartial (a raw perspective).

Anyway, thanks bro (I may be offline for a bit as I moved closer to the coast of Tanzania in the south... I'm a bit off grid, sparse electricity!! lol!)... hence I rely on my models a lot and will be donating them to schools in Tanzania, so I need to train the model in our languages, with good and bad grammar, and slang and dialectal speech patterns.

With some Q&A we can do direct translation of answers: we should be translating from Swahili to Shona and vice versa, i.e. translating between African languages, so the model can also discern our patterns (see the sketch below). This is what we are trying to train into the model: the patterns of our speech and thoughts and languages and histories. We will need some biased, single-perspective African histories also, i.e. oral histories... (undisputed)
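
A sketch for mining those direct African-language translation pairs out of the multilingual set built earlier; it relies on the `id` field from the expansion sketch above, and the instruction wording is an assumption:

```python
from itertools import permutations

# Assumed target set: Swahili, Shona, Xhosa, Hausa, Bambara.
AFRICAN = ["sw", "sn", "xh", "ha", "bm"]

def translation_pairs(multilingual_rows):
    """Emit src->tgt translation rows, in both directions, between African languages."""
    by_id = {}
    for r in multilingual_rows:  # group versions of the same question by id
        by_id.setdefault(r["id"], {})[r["lang"]] = r["question"]
    for versions in by_id.values():
        for src, tgt in permutations(AFRICAN, 2):  # both directions
            if src in versions and tgt in versions:
                yield {"instruction": f"Translate from {src} to {tgt}:",
                       "input": versions[src],
                       "output": versions[tgt]}
```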
