"If you feel like you can't keep running, look up at the sky above you ༼ つ ◕_◕ ༽つ" - leo once said. Haha, a classmate taught me that during the 800m at our freshman-year department sports meet, and it comes back to me every time I hit the wall on the rubber track.
Lately I've been a little hooked on graduation-season music. It must be muscle memory from a decade and more, another three-year cycle coming around: I fell for 《再见再见》 when I graduated from junior high, and liked 《不说再见》 when I finished senior high...
Sigh >︿<, the ChatGPT bot I'd been tinkering with for ages died young after all. I checked the logs: it kept me company on the server, running around the clock, for only seven days. I was going to rant about how rigorous QQ's protocol is, but thought better of it; it's for security, understandable 🥹 Anyway, making progress is what matters 😊
Since I can't sleep anyway, here are two pictures to hold down the post; may there be fewer mosquitoes tonight ╥﹏╥...
The first is a flower in Jiefang Park, name unknown.
The second is the view of the Yangtze River Bridge from the top of the Yellow Crane Tower.
Below is a harmless => piece of hjs (highlight.js) test code 👇 [GitHub theme]
def main(args, phase):
    #【1】 Init the dataset of synthetic images from real samples or random noise
    #【2】 Init [lr, loss_func, optimizer]
    #【3】 Init the first buffer from one randomized buffer file
    #     (each buffer holds 10 teacher-model trajectories)
    #【4】 Train and evaluate
    for it in range(0, args.Iterations + 1):
        #【4.1】 Evaluate and save on a regular basis
        if it % 100 == 0:
            pass  # evaluate the synthetic data with different model architectures
        if it % 1000 == 0 or save_this_it:
            pass  # save the current best synthetic dataset
        #【4.2】 Prepare to train syn_steps steps on the student model:
        # init the student model, pick one teacher trajectory from the current buffer,
        # and init [start_epoch, starting_params, target_params, student_params]
        for step in range(args.syn_steps):
            pass  # sample a batch of synthetic images, compute grads,
                  # update forward_params, and append them to student_params
        #【4.3】 Compute the loss against one expert's trajectory and optimize syn_img
        # compute and store param_loss as well as param_dist
        grand_loss = param_loss / param_dist
        grand_loss.backward()
        # optimizer step on syn_img and lr
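The skeleton above is the outer loop of trajectory-matching dataset distillation (what the buffer/teacher/student naming suggests). To make steps 【4.2】 and 【4.3】 concrete, here is a minimal self-contained PyTorch sketch of just the unrolled inner loop and the normalized matching loss. Everything in it is an illustrative stand-in, not the real distill.py: the tiny linear "student", the fake teacher checkpoints, and all names and shapes (syn_images, syn_lr, SYN_STEPS, ...) are assumptions made up for this toy example.

import torch
import torch.nn.functional as F

# Hypothetical sizes; everything below is a toy stand-in.
D_IN, N_CLASSES, N_SYN, SYN_STEPS = 32, 10, 50, 5

syn_images = torch.randn(N_SYN, D_IN, requires_grad=True)  # learnable synthetic data
syn_labels = torch.randint(0, N_CLASSES, (N_SYN,))
syn_lr = torch.tensor(0.01, requires_grad=True)            # learnable inner-loop lr
opt = torch.optim.SGD([syn_images, syn_lr], lr=1e-2)

# Fake teacher trajectory: its params at the start epoch and some epochs later.
starting_params = torch.randn(D_IN, N_CLASSES)
target_params = starting_params + 0.1 * torch.randn(D_IN, N_CLASSES)

#【4.2】 Unroll a linear "student" for SYN_STEPS steps on the synthetic images.
student_params = [starting_params.clone().requires_grad_(True)]
for step in range(SYN_STEPS):
    logits = syn_images @ student_params[-1]
    ce = F.cross_entropy(logits, syn_labels)
    # create_graph=True keeps the graph of the update itself,
    # so gradients can later flow back into syn_images and syn_lr
    (grad,) = torch.autograd.grad(ce, student_params[-1], create_graph=True)
    student_params.append(student_params[-1] - syn_lr * grad)

#【4.3】 Matching loss: distance to the teacher's later checkpoint,
# normalized by how far the teacher itself moved over that span.
param_loss = F.mse_loss(student_params[-1], target_params, reduction="sum")
param_dist = F.mse_loss(starting_params, target_params, reduction="sum")
grand_loss = param_loss / param_dist

opt.zero_grad()
grand_loss.backward()  # backprop through the whole unrolled inner loop
opt.step()             # one outer step on the synthetic images and lr
print(f"grand_loss = {grand_loss.item():.4f}")

The create_graph=True flag is the key design point: it is what lets grand_loss.backward() reach back through all the inner updates, so both the synthetic images and the learnable inner-loop lr receive gradients from a single outer loss.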